Versatile C++ game scraper: Skyscraper
-
@stephanepare GoodTools is obsolete. It's best to go with No-Intro instead.
Google is your friend here if you are not part of a larger community cough private trackers cough
Just search for "No-Intro 2017"
-
I got my hands on an EmuMovies / gamesdbase API key now (Are they the same? It's a bit confusing). Spent a bit of time checking things out with their demo VB project, and I think I get the gist of the implementation. I will write my own implementation of it, and it will then be added to Skyscraper as a scraping module.
-
@muldjord Fantastic, looking forward to this :)
-
@muldjord - Would it be possible to redirect the localdb folder from [homefolder]/.skyscraper/ to [install folder]/.skyscraper/? I copied the install files to a USB HDD. I left Skyscraper running, scripted to scrape my entire collection, and when I came back to it today after 3 or 4 days I realised the SD card was full (64GB card). So I'm trying to transfer everything over to my 1TB HDD.
I had a weird error where EmulationStation was crashing on boot, but I think it was down to the full card. I'm going to copy the contents to the USB HDD and see how it goes :)
-
@LocVez Yes, just use '-d [dbs folder]'. Check the readme :) Just remember that using '-d' points to the platform dbs folder you are gonna be scraping with localdb. So, for instance, if you wanted to scrape 'nes' with a custom local nes db path, you would put in '-d [whatever]/.skyscraper/dbs/nes'.
EDIT: To elaborate: You can't change the Skyscraper folder, but if you want it to be seamless, you can always just create a symbolic link from ~/.skyscraper to wherever your usb hdd is mounted. Or, simply mount your usb hdd at ~/.skyscraper. :)
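A quick sketch of the symlink approach, using throwaway temp dirs so it's safe to try anywhere (in real use the first path would be your USB mount point and the link would be ~/.skyscraper):

```shell
# Demo of the symlink approach with throwaway temp dirs.
# In real use "$usb" would be your USB mount point and "$link" would be ~/.skyscraper.
usb=$(mktemp -d)
mkdir "$usb/.skyscraper"
link=$(mktemp -d)/skyscraper
ln -s "$usb/.skyscraper" "$link"
touch "$link/db.xml"          # anything written through the link...
ls "$usb/.skyscraper"         # ...actually lands on the USB drive
```

Skyscraper itself never notices the difference; it just follows the link.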
-
@muldjord Nice one, thanks! :) And yes, I will now go read the readme <blush>
-
Sorry @muldjord, another suggestion or two... Could we have a switch to set a timeout for scraping each rom file? I've noticed a handful of occasions where scraping seems to take 10 minutes for a particular few files, and I'm unsure if the fault is on the scraper or the scrapee side. But if it could skip and move on when a file takes longer than 10 seconds or so (with the timeout user-configurable), that would be great.
Also - could we list the database and platform being scraped in the text that says xxxx/xxxx --- Pass 1, Pass 2 ------ <rom name> etc.? Then, in the event of a "stuck" scrape, it can be cancelled and that database can be omitted from the script.
I have the script set up in the following way:
Skyscraper -p megadrive -s gamesdatabase --unattend
Skyscraper -p megadrive -s mobygames --unattend
etc, etc. But it's impossible to know which database is causing issues :(
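One workaround for the script above: a small wrapper that announces the module before each run, so a stalled scrape is easy to attribute (module names here are just the ones from this thread; the Skyscraper line is echoed as a dry run):

```shell
# Dry-run wrapper: announce the module before each run so a stalled scrape
# is easy to attribute. Drop the inner 'echo' to actually run Skyscraper.
out=$(for module in gamesdatabase mobygames; do
    echo "=== megadrive / $module ==="
    echo Skyscraper -p megadrive -s "$module" --unattend
done)
echo "$out"
```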
Thanks again!
-
Sounds really odd. I have a 30-second timeout on the network connections (tested and working well), so it has to be a problem elsewhere, perhaps on your system. I've never had my scraper wait for 10 minutes while scraping (and I've scraped A LOT!). Maybe it's related to saving data to the SD card. That's not something I can fix, as it's system related. If you can investigate a bit further it might help, but for the moment I am going to assume it's a problem with your system.
-
I've wanted this myself, so I'll think about it. :) The platform is already part of the output, but I could add an output line about the current scraping module.
EDIT: Btw, you can actually figure out where it stopped. Just look at the 'skipped*' files. The one that was changed last is the one where it stopped.
EDIT2: Another thing I just thought of: if you have been scraping a lot, it might also be that some of the sites have started throttling you. That would result in transfers taking a looong time, but not timing out as such.
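For example, something like this finds the last-touched 'skipped*' file (demoed here with a temp dir standing in for ~/.skyscraper):

```shell
# The most recently modified 'skipped*' file marks where the run stopped.
dir=$(mktemp -d)                  # stands in for ~/.skyscraper
touch "$dir/skipped-nes.txt"
sleep 1
touch "$dir/skipped-megadrive.txt"
latest=$(ls -t "$dir"/skipped* | head -n 1)
echo "$latest"
```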
Have you noticed if it's any particular scraping module that is slow?
EDIT3: 'Scraper' is now included in the output per entry, but only when using the '--verbose' option. It's redundant information, so I didn't want it shown by default. I think it works well when it's only shown with '--verbose'. That's the whole point of verbose. Will be in 1.8.3.
-
Thanks @muldjord, I've mapped the .skyscraper folder to my USB HDD, but it was doing this on the SD card as well as the USB HDD. As you suggested, it's likely a website throttling or refusing the connection. I did notice tonight when I shut it down that the "gamesdatabase" website had banned me again, so I wonder if it was that. At the moment I'm running one scraper per system at a time to check everything is OK.
I will eagerly await the addition to verbose :)
Note - the platform is only part of the output if it successfully scrapes. If, as in my case, it doesn't find anything and it's taking 10 minutes to scrape, it doesn't display this information. Thinking more about it, taking such a long time to scrape and returning "no results" more than likely does indicate a ban from the scraper website... Looking forward to the EmuMovies addition :D
-
Just added a check for "bad scraping runs", which basically means that Skyscraper will quit if the first 30 files are all missed. This indicates that the scraping module being used doesn't support the platform. Will be in 1.8.3.
-
@muldjord ScreenScraper has a database containing media for many different regions. They usually also store the hashes of the corresponding roms, tagged with their respective region.
I assume Skyscraper is just grabbing the first media it finds instead of basing it on the rom's respective region/country. Could you add that feature so we get the "correct" media if available, otherwise falling back on some preference order?
-
@paradadf I would like to do that, but I must admit that it's a lot of work for something I don't need myself. So unless someone else implements it in a patch and sends it to me, it won't happen I'm afraid.
When using 'screenscraper', Skyscraper always looks for the 'wor' or 'us' regions (or whatever they are called). If it doesn't find those, it picks the next one in line, as I recall.
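That fallback could be sketched roughly like this (the region tags and preference order here are assumptions for illustration, not Skyscraper's actual code):

```shell
# Pick the first available region from a preference list.
available="jp eu"                 # regions a site returned (hypothetical)
picked=""
for region in wor us eu jp; do    # assumed preference order
    case " $available " in
        *" $region "*) picked=$region; break ;;
    esac
done
echo "picked: $picked"            # prints "picked: eu"
```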
-
@muldjord understood, thanks!
-
Hi guys, I am sad to inform you that Skyscraper has been discontinued, effective immediately. I have been contacted by sources about the nature of the scrapings themselves. For that reason I no longer wish to pursue this project, as I have no intention of being an inconvenience to the websites or authors of the information collected by Skyscraper.
Thank you for all of your feedback and support.
-
Sad to hear. I understand that some websites fear the traffic your tool might produce, but isn't that the case with any other scraper? Why collect the data if it can't be used?
Anyway, it's your program and your decision, of course.
-
That's really a bummer. The time and energy you must have invested in making this :(
All that metadata and cover art... there has to be a better way to store, manage, combine, and distribute it. I wonder if it's considered public domain; maybe the Internet Archive would host such a project.
-
Maybe I'm wrong about the traffic. It could be a copyright thing as well.
I found this the best scraper around because it works on the Pi and it can combine data from various sources. And the fact that it saves data locally saves traffic if you need to rescrape.
It seems @muldjord has strong enough reasons to pull the plug. Anyway, thanks for this great tool. (It would have been a nice addition to retropie-setup with a small GUI, like sselph's scraper.) :(
-
Stay tuned for news. Skyscraper might (MIGHT!) be back online soon-ish. But in a bit of a cut-down state, I'm afraid... More info when I get through all of the paperwork.
-
Great news. Regardless of whether it comes back, hopefully you can share some details on what exactly happened that caused you to pull it. At a minimum it would be useful information for the developers of other scrapers, like @sselph's, so that they don't run into the same issues.