Versatile C++ game scraper: Skyscraper
-
@dorkvader I didn't see it anywhere. I just see the @ symbol and your name. If you can actually see mine, you can just send me an email, though. I'm not sure what my profile looks like to other people either. :)
-
@muldjord Thanks for the awesome tool!
So, I've run into an issue scraping my collection with screenscraper.fr as the source. My ROMs are all compressed as individual 7z files (to save space); however, Skyscraper is trying to match the SHA1/MD5 of the 7z files themselves instead of the ROMs they contain. Obviously this won't work, so it's failing to match anything.
Most other tools will actually uncompress the file and hash the resulting file(s), or even pull the checksum out of the header for supported formats (ZIP stores a CRC32 checksum in its header for each file in the archive; 7z stores CRC32 checksums as well).
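For instance, the stored CRCs can be pulled out of a ZIP in a couple of lines (a minimal Python sketch using the standard zipfile module; the path is just a placeholder):

    import zipfile

    # Read the per-file CRC32 values straight from the ZIP's central directory,
    # without decompressing any data.
    with zipfile.ZipFile("/path/to/rom.zip") as zf:
        for info in zf.infolist():
            print(f"{info.filename}: CRC32 {info.CRC:08X}")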
Can this be implemented in Skyscraper? I believe the libarchive library has support for 7z, so it shouldn't be hard to implement. Alternatively, Skyscraper could call the external 7z binary with its hash command ("7z h") and read the hashes from its output (this is a very hacky way to do it, but it would be easy to implement).
Thanks for all the great work!
Edit: If this is too much work, how about adding a command-line option to override the SHA1 and MD5 for individual ROMs? That way I could wrap Skyscraper in a custom Python script that would traverse a directory and, for each file, decompress it, generate SHA1 and MD5 hashes, then call Skyscraper. Here's some Python-like pseudocode to illustrate:
    #!/bin/pseudocode
    for file in "/path/to/compressed/roms":
        rom = run("7z e file -so").stdout   # extract to stdout, read into memory
        rom_md5 = hash.MD5(rom)
        rom_sha1 = hash.SHA1(rom)
        run("Skyscraper -s screenscraper -p snes -sha1 rom_sha1 -md5 rom_md5 file")
    done
I could write a working script that does the above in basically no time. The script shouldn’t add more than 1 second per ROM to the scraping times (compared to running Skyscraper by itself on uncompressed files), which I could live with.
So yeah, if adding native decompression support isn't a priority, adding hash override options would be an acceptable alternative. :)
-
EDIT: I just thought of the obvious solution to the issues mentioned below: I can just decompress to RAM.
@timb Yes, this has been requested quite a few times actually. If I did this, it would be implemented with an '--unpack' option that would enable the decompression functionality. And it would work on zip and 7z files. The only issue I see is the bashing I would be doing on the SD card. Unpacking each ROM to a temporary file and subsequently deleting it again, sometimes thousands of times in a row... That's just a very bad idea if you want to keep your SD card alive for more than a few weeks. SD cards don't spread their writes around like an SSD does. So this is a real concern.
With that said, I could add it, and make sure to put up a red warning text when using it. At least people would be aware of the issue and can't blame me when their setup goes up in smoke. ;)
-
@used2berx OK, my email is now visible in my profile. You just have to enable it to be visible through the profile edit page. It's like the snake was at my feet ready to strike, but I'll be damned if I could see it. I did finish the Vectrex box art, btw.
-
Yes, decompressing to RAM is the way to go. That's what the pseudocode in my last post essentially did. (It ran 7z with the -so (stdout) option, which decompresses to stdout; that output is read through a pipe into the wrapper process as a data object, so it never touches the SD card.) I use this method in a Python script I wrote that sorts and verifies ROMs. Obviously you won't have to worry about piping from an external process, since you can just use libarchive (or another library) to implement the decompression natively, but the principle is the same: decompress into memory.
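For illustration, here is the entire pipe-to-memory hashing step as a standalone Python sketch (assuming p7zip is installed and each archive holds a single ROM; the path is a placeholder):

    import hashlib
    import subprocess

    archive = "/path/to/rom.7z"  # placeholder

    # Ask 7z to extract to stdout (-so) and capture the data in memory,
    # so nothing is ever written to the SD card.
    rom = subprocess.run(["7z", "e", archive, "-so"],
                         capture_output=True, check=True).stdout

    print("MD5: ", hashlib.md5(rom).hexdigest())
    print("SHA1:", hashlib.sha1(rom).hexdigest())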
That said, decompressing to a temp file can still work if you have to. Simply do it to /dev/shm, which is on a tmpfs (ramdisk). Or do it to /tmp with a warning that includes instructions on how to enable mounting /tmp as tmpfs (it's a single systemctl enable command). Obviously both of these options are Linux-specific, but you could always implement fallback code for other platforms if you're worried about that sort of thing.
As a side note, I'm not super concerned about wearing out an SD card. Modern cards have wear-leveling algorithms built into their controllers, just like SSDs, so they should be fairly robust. If they do wear out, they're cheap and plentiful. I do embedded development (hardware and software) for a living and have never seen an SD card die from wear. (That said, for datalogging applications I started using the F2FS filesystem instead of ext4 last year; I've been so pleased with it I've started using it on most of the embedded Linux systems I work with, including all my personal SBCs. It's a log-structured file system designed specifically for NAND flash storage that helps reduce wear, increase speed, and improve reliability, especially when it comes to lots of small read-modify-write operations.)
-
Thank you for the detailed suggestions. They all make sense to me. I think I'll go with libarchive; it's relatively simple to write a wrapper for it. I'll figure something out. My main concern with using libarchive is that it complicates the installation, as it would obviously require libarchive as a dependency. Might not seem like a big deal, but I absolutely despise build systems such as autotools and cmake and try to avoid them at all costs. So I'll try to see if I can make it work in a simple manner with Qt's qmake instead. Might not be as robust, but at least I won't have to spend the next 2 weeks pulling my hair out of my scalp while fighting the unbelievably overcomplicated, has-to-do-everything-to-the-point-that-only-absolute-experts-get-it build systems. Oh yeah, we better check for that Fortran compiler, that's important.
EDIT: You know what, I'll look into just using Qt's QProcess and having 7z extract to stdout, which is then read back through the QProcess.
-
Most distros have libarchive available as a package (Debian/Raspbian: apt install libarchive-dev), so can't you just dynamically link to the library the package installs?
I can't see how that would be any worse than the dependency of having to install 7z (p7zip), and it would work much, much better than reading 7z's stdout. (I've written a few shell scripts around p7zip. While it is stable, it's very clear that it's a Linux console application written by a Windows developer, which it was! I had to write a lot of "logic code" around it in order to get it to behave like a normal *nix shell utility.)
I've been slowly converting my hundreds of utility shell scripts into Python, and of those about a dozen call p7zip; however, I've been going back and rewriting them to use PyArchive (a Python module for libarchive) instead. The ones I've done so far ended up much simpler than calling 7z as a subprocess (one script went from 50 lines to 15).
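Reading the ROM out of an archive and hashing it entirely in memory ends up looking roughly like this (a sketch using the libarchive-c binding, since I'm not sure PyArchive exposes exactly the same calls; the path is a placeholder):

    import hashlib
    import libarchive  # the libarchive-c binding

    # Stream the first (for ROM sets, usually the only) entry out of the archive
    # and hash it in memory -- no external 7z process, no temp files.
    with libarchive.file_reader("/path/to/rom.7z") as archive:
        for entry in archive:
            data = b"".join(entry.get_blocks())
            print(entry.pathname, hashlib.sha1(data).hexdigest())
            break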
If it were me, I’d go with a native libarchive implementation. (I actually thought about making an attempt at implementing it, but I don’t really do much work with C++, so it would end up more like C and be sort of hacky.)
-
It's on master now in a quick first draft. Please try it out if you get the time. The option is '--unpack'. It will unpack and checksum a single file inside a 7z or zip file. If more than 1 file is found inside the compressed file, it falls back to doing the checksum calculation from the base file. Let me know how it goes if you do check it out.
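The rule it follows is roughly this (sketched in Python with the libarchive-c binding purely for illustration; the real code is C++ and shells out to 7z via QProcess):

    import hashlib
    import libarchive  # libarchive-c, standing in for the real 7z/QProcess plumbing

    def checksum(path):
        """SHA1 of the single file inside a zip/7z, falling back to the file itself."""
        contents = []
        with libarchive.file_reader(path) as archive:
            for entry in archive:
                contents.append(b"".join(entry.get_blocks()))
                if len(contents) > 1:
                    break  # more than one file inside: give up on unpacking
        if len(contents) == 1:
            return hashlib.sha1(contents[0]).hexdigest()
        # Fall back to checksumming the compressed file as it sits on disk.
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()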
EDIT: I could dynamically link. The issue is that I'd just be adding it manually to the flags without knowing the actual library locations on the specific system. If for some reason it doesn't exist or work on that system, it breaks the compilation. And I do not want to get into that territory. I'm OK with the current solution: if the user doesn't have p7zip, it just doesn't work, but it still compiles fine. I'll add some error checking as well.
-
Wow, that was fast! Okay, I’ll download it tonight and do my best to break it! :D
I'm just thinking out loud here, as an alternative in case the 7zip solution ends up not working reliably for some reason: as for dynamically linking, you could always just let the user pass the location of the library (--with-libarchive=/usr/local/include) to the linker. Then just use a define or something around any code that uses the library, so it doesn't get compiled in if the location isn't provided. It wouldn't be hard to add some logic to the setup shell script either that searches for libarchive and informs the user if it can't be found. (Using a normal makefile-based build system, anyway; I'm not sure about the build system Qt uses.) It's not ideal, and sort of hacky, but it would be straightforward to do.
-
@muldjord
Okay, so it compiled and installed fine, but immediately I ran into a bit of an issue. This is the output of the first two NES games it tries to scrape:

    #1/733 (T1) Pass 1 Pass 2 Pass 3 Pass 4 ---- Game '10-Yard Fight (USA, Europe)' not found :( ----
    Debug output:
    Tried with sha1, md5, filename: '10-Yard%20Fight%20%28USA%2C%20Europe%29.7z', 'DA39A3EE5E6B4B0D3255BFEF95601890AFD80709', 'D41D8CD98F00B204E9800998ECF8427E'
    Platform: nes
    Tried with sha1: DA39A3EE5E6B4B0D3255BFEF95601890AFD80709
    Platform: nes
    Tried with md5: D41D8CD98F00B204E9800998ECF8427E
    Platform: nes
    Tried with name: 10-Yard%20Fight%20%28USA%2C%20Europe%29.7z
    Platform: nes
    Elapsed time: 00:00:02
    Estimated time left: 00:35:22

    #2/733 (T1) Pass 1 Pass 2 Pass 3 Pass 4 ---- Game '1942 (Japan, USA)' not found :( ----
    Debug output:
    Tried with sha1, md5, filename: '1942%20%28Japan%2C%20USA%29.7z', 'DA39A3EE5E6B4B0D3255BFEF95601890AFD80709', 'D41D8CD98F00B204E9800998ECF8427E'
    Platform: nes
    Tried with sha1: DA39A3EE5E6B4B0D3255BFEF95601890AFD80709
    Platform: nes
    Tried with md5: D41D8CD98F00B204E9800998ECF8427E
    Platform: nes
    Tried with name: 1942%20%28Japan%2C%20USA%29.7z
    Platform: nes
    Elapsed time: 00:00:04
    Estimated time left: 00:29:06
Notice the MD5/SHA1 sums are the same on both? I know that's not correct. Here's the MD5 on the first title (extracted):
    timb-mba:~ timb$ 7z e "~/10-Yard Fight (USA, Europe).7z" -so | md5
    8caac792d5567da81e6846dbda833a57
(I set 7z to extract directly to stdout and piped it into md5 so it would work like it does in your program.)

Now, after scraping these first two files it starts to scrape fine:
    #3/733 (T1) Pass 1 ---- Game '1943 - The Battle of Midway (USA)' found! :) ----
    ~SNIP~
    Debug output:
    Tried with sha1, md5, filename: '1943%20-%20The%20Battle%20of%20Midway%20%28USA%29.7z', '443D235FBDD0E0B3ADB1DCF18C6CAA7ECEEC8BEE', 'DEEAF9D51FC3F13F11F8E1A65553061A'
    Platform: nes
    Elapsed time: 00:00:18
    Estimated time left: 01:15:41
So, it seems like the program is somehow hashing bad data at first? I took a cursory glance at the source but nothing instantly jumps out at me.
Also, something else I noticed: it seems the local cache lookup is still doing a SHA1 on the packed 7z file, instead of the unpacked file it contains. This means it's re-downloading all the assets from screenscraper instead of using the assets it previously downloaded when I scraped the extracted files. (I know this is a quick first pass of the feature; I just wanted to point it out.)
-
@timb I'll look into this when I get the time. And yes, it will keep identifying the files from the sha1 of the actual file on the disk. That is not gonna change. You should see this as a tool to help you identify custom made 7z or zip files as they often differ from the ones in the screenscraper database. So I expect the user to have "completed" their files before they start scraping. Both solutions have caveats obviously. :)
-
@muldjord
Personally, I think it makes more sense to cache based on the hash of the actual ROM file itself and not the hash of the compressed file that contains it (because the hash of a 7z, ZIP or RAR file may change, even though the data it contains does not change). That said, you're right, it is a lot simpler to just cache based on the base file's hash. I can also understand why you wouldn't want to spread the 7z unpack hack around further in the code. So I totally see your point. :)
Let me know if you need any more debug data on the incorrect hash bug.
-
@timb Yes, I do see your point in that as well, if I change it to a '--nounpack' option instead (so the default would be to always unpack and identify from the files inside the zips in the local cache). But it would mean that all users' current localdbs would break in an instant... So I'd need to convert the sha1's on the fly, from zip sha1's to contained-file sha1's, in order for that not to happen, which means I'd need to sha1 everything twice for a long time in order to compare against the sha1 from the localdb. Ugh... :D
-
@muldjord
Yeah, that's kind of a mess. Though, there is a potential solution: I'm about halfway through writing a quick and dirty Python script that parses the database XML file and changes the SHA1 keys from the unpacked ROM to that of the packed 7z file. It could easily be modified to do the reverse. You could prompt the user to run it during an upgrade. Obviously it wouldn't change entries where the original 7z file no longer exists, but the script could always trim those from the DB and run the Skyscraper cleanup option. Hmm, yeah, it seems totally doable. (If you do eventually decide to go this route, I'd be happy to write a real production-ready version of such a script with proper error handling and such. I could also do it in Bash (ugh) if you didn't want any dependencies.)
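The core of it is just a directory walk plus an XML rewrite; roughly something like this (a rough sketch, not the finished script — the db.xml path and the 'sha1' attribute name are stand-ins for whatever the real localdb format actually uses):

    import hashlib
    import pathlib
    import subprocess
    import xml.etree.ElementTree as ET

    DB = "/path/to/db.xml"                   # stand-in for the real localdb file
    ROM_DIR = pathlib.Path("/path/to/roms")  # placeholder

    # Map SHA1-of-unpacked-ROM -> SHA1-of-the-7z-that-now-contains-it.
    sha1_map = {}
    for archive in ROM_DIR.glob("*.7z"):
        rom = subprocess.run(["7z", "e", str(archive), "-so"],
                             capture_output=True).stdout
        if rom:
            sha1_map[hashlib.sha1(rom).hexdigest().upper()] = \
                hashlib.sha1(archive.read_bytes()).hexdigest().upper()

    # Rewrite the keys; 'sha1' is a hypothetical attribute name.
    tree = ET.parse(DB)
    for entry in tree.iter():
        old = (entry.get("sha1") or "").upper()
        if old in sha1_map:
            entry.set("sha1", sha1_map[old])
    tree.write(DB + ".new")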
-
@timb Thank you, I'll give it some thought. I'm leaning towards a solution that does it automatically for each localdb entry when it is requested. That is, if I decide to go ahead with it at all. :) I'm also quite happy with the solution as it is, provided I can get that weird bug you posted about earlier fixed.
-
@muldjord
Speaking of the weird bug, it gets weirder!

    Total number of games: 733
    Successfully scraped games: 661
    Skipped games: 72 (Filenames saved to '~/.skyscraper/skipped-screenscraper.txt')
Now, these same 733 games scraped just fine as extracted ROMs. If I use 7z on the Pi to extract these problem files (the same 7z Skyscraper is using) they extract fine and show the correct hashes. So I know it's not the archive or ROMs that are the problem.
All 72 of the skipped files are returning the same hash (that I posted earlier, it doesn't appear to change). Weird, right?
-
@timb I have a guess. Maybe it checksums "no data" (or an error message) because of the way I use the QProcess read, or something like that. It's all very hacky. I haven't had time to look properly into this yet; I'm just guessing without having gone through your descriptions in detail.
-
@muldjord
I had similar thoughts earlier. An easy way to see if it's checksumming an error message from 7zip is to entirely suppress all output from it. We can do that with the following flags:

    7z e -y -bd -bso0 -bse0 -bsp0 -so
That essentially completely disables console, error and progress output and also answers ‘yes’ to any prompts. It’s the closest you can get to a “-qq” flag. I’ll add that to the source this afternoon, recompile and see if it makes any difference.
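Put together, the guard would be something like this (a Python sketch of the idea only; the real change would be the equivalent checks around QProcess in the C++ code, and the path is a placeholder):

    import hashlib
    import subprocess

    archive = "/path/to/rom.7z"  # placeholder

    # Suppress all of 7z's console/error/progress output and auto-answer prompts,
    # so the only bytes on stdout are the extracted ROM itself.
    proc = subprocess.run(["7z", "e", "-y", "-bd", "-bso0", "-bse0", "-bsp0", "-so", archive],
                          capture_output=True)

    if proc.returncode == 0 and proc.stdout:
        sha1 = hashlib.sha1(proc.stdout).hexdigest()   # hash the unpacked ROM
    else:
        with open(archive, "rb") as f:                 # extraction failed: hash the 7z itself
            sha1 = hashlib.sha1(f.read()).hexdigest()
    print(sha1)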
No rush on this, by the way. I appreciate you implementing it at all. :)
-
Please git pull and try it again. I've improved it quite a bit and added further error checking. I'm interested in knowing if you still get those weird same-sha1 errors. Also, I've added a 20 meg limit to the unpacking feature to try and avoid running into memory limitations on the Pi. Hopefully this is temporary, as I would like it to read chunks instead, but so far I haven't gotten that to work reliably. So it takes up as much RAM as the ROM takes up on disk to calculate the checksums. And if you run several threads AND the compositor is working with images, that spells trouble.
-
Whatever you did seems to have fixed it! I’ll let you know how the other platforms go (I’ve only tried NES so far). I’m just putting the finishing touches on my script that swaps the SHA1 hashes in the DB files; I want to run it first before scraping the other platforms.
A 20MiB limit seems sensible for now. I don't think any of my ROMs come close to that. (I keep my N64 stuff uncompressed, as the emulator doesn't support on-the-fly decompression.)
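And if you ever want to drop the limit, hashing the stream in fixed-size chunks keeps memory use constant; something along these lines (a Python sketch of the idea; the chunk size is arbitrary, and the real code would do the same reads against QProcess):

    import hashlib
    import subprocess

    # Stream the extracted ROM through the hash 64 KiB at a time, so memory use
    # stays flat no matter how large the ROM is.
    proc = subprocess.Popen(["7z", "e", "-bd", "-bso0", "-bse0", "-bsp0", "-so",
                             "/path/to/rom.7z"],       # placeholder path
                            stdout=subprocess.PIPE)
    sha1 = hashlib.sha1()
    for chunk in iter(lambda: proc.stdout.read(64 * 1024), b""):
        sha1.update(chunk)
    proc.wait()
    print(sha1.hexdigest())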