Versatile C++ game scraper: Skyscraper
-
@muldjord - Would it be possible to redirect the localdb folder from [homefolder]/.skyscraper/ to [install folder]/.skyscraper/? I have copied the install files to a USB HDD. I left Skyscraper running, scripted to scrape my entire collection over the last 3 or 4 days, and when I came back to it today I realised the SD card was full (64GB card). So I'm trying to transfer everything over to my 1TB HDD.
I had a weird error where EmulationStation was crashing on boot, but I think it was down to the full card. I'm going to try copying the contents to the USB HDD and see how it goes :)
-
@LocVez Yes, just use '-d [dbs folder]'. Check the readme :) Just remember that '-d' points to the platform dbs folder that localdb will use for the platform you're scraping. So, for instance, if you wanted to scrape 'nes' with a custom local nes db path, you would put in '-d [whatever]/.skyscraper/dbs/nes'.
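For example, assuming your USB HDD is mounted at /media/usb (a hypothetical mount point, adjust it to your setup), scraping 'nes' against a db folder on the drive would look something like:
Skyscraper -p nes -d /media/usb/.skyscraper/dbs/nes --unattend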
EDIT: To elaborate: You can't change the Skyscraper folder itself, but if you want it to be seamless, you can always create a symbolic link from ~/.skyscraper to wherever your USB HDD is mounted. Or simply mount your USB HDD at ~/.skyscraper. :)
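Something along these lines should do it (again assuming /media/usb as the mount point; the paths are examples only):
# move the existing folder onto the drive, then link it back into place
mv ~/.skyscraper /media/usb/skyscraper
ln -s /media/usb/skyscraper ~/.skyscraper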
-
@muldjord Nice one, thanks! :) And yes, I will now go read the readme <blush>
-
Sorry @muldjord, another suggestion or two... Could we have a switch to set a timeout for scraping each rom file? I've noticed a handful of occasions where scraping a particular few files seems to take 10 minutes, and I'm unsure if this is a fault on the scraper or the scrapee side. If we could make it skip and move on when a file takes longer than 10 seconds or so, that would be great (with the timeout manually settable by the user, I mean).
Also - could we list the database and platform being scraped in the text that says xxxx/xxxx --- Pass 1, Pass 2 ------ <rom name> etc.? Then, in the event of a "stuck" scrape, it could be cancelled and that database omitted from the script.
I have the script set up in the following way:
Skyscraper -p megadrive -s gamesdatabase --unattend
Skyscraper -p megadrive -s mobygames --unattend
etc., etc. But it's impossible to know which database is causing issues :( (a logging workaround is sketched below)
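In the meantime, one way to work around it would be to echo a timestamp and module name before each run (the log path here is just an example), so the last log line shows which module was active when it got stuck:
echo "$(date) megadrive/gamesdatabase" >> ~/scrape.log
Skyscraper -p megadrive -s gamesdatabase --unattend
echo "$(date) megadrive/mobygames" >> ~/scrape.log
Skyscraper -p megadrive -s mobygames --unattend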
Thanks again!
-
Sounds really odd. I have a 30-second timeout on the network connections (tested and working well), so it has to be a problem elsewhere, perhaps on your system. I've never had my scraper wait for 10 minutes while scraping (and I've scraped A LOT!). Maybe it's related to saving data to the SD card. That's not something I can fix, as it's system-related. If you can investigate a bit further it might help, but for the moment I'm going to assume it's a problem with your system.
-
I've wanted this myself, so I'll think about it. :) The platform is already part of the output, but I could add an output line about the current scraping module.
EDIT: Btw, you can actually figure out where it stopped. Just look at the 'skipped*' files. The one that was changed last is the one where it stopped.
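Assuming you run this from the folder where Skyscraper writes those files, something like this lists them with the most recently changed one first:
ls -lt skipped*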
EDIT2: Another thing I just thought of: if you have been scraping a lot, it might also be that some of the sites have started throttling you. That would result in transfers taking a loooong time, but wouldn't trigger a timeout as such.
Have you noticed if it's any particular scraping module that is slow?
EDIT3: 'Scraper' is now included in the output per entry, but only when using the '--verbose' option. It's redundant information, so I didn't want it on by default; I think it works well when it's only shown with '--verbose'. That's the whole point of verbose. Will be in 1.8.3.
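So once 1.8.3 is out, something like this should show the scraping module per entry:
Skyscraper -p megadrive -s mobygames --unattend --verbose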
-
Thanks @muldjord, I've mapped the .skyscraper folder to my USB HDD, but it was doing this with my SD card as well as the USB HDD. As suggested, it's likely a website throttling or refusing the connection. I did notice tonight when I shut it down that the "gamesdatabase" website had banned me again, so I wonder if it was that. At the moment I'm running the scraper one system at a time to check that all is OK.
I will eagerly await the addition to verbose :)
Note - the platform is only part of the output if it successfully scrapes. If, as in my case, it doesn't find anything and it's taking 10 minutes to scrape, it doesn't display this information. Thinking more about it, taking such a long time to scrape and returning "no results" more than likely does indicate a ban from the scraper website... Looking forward to the emumovies addition :D
-
Just added a check for "bad scraping runs", which basically means that Skyscraper will quit if the first 30 files are all missed. This indicates that the scraping module being used doesn't support the platform. Will be in 1.8.3.
-
@muldjord ScreenScraper has a database containing media for many different regions. They usually also store the hashes of the corresponding roms, tagged with their respective region.
I assume Skyscraper is just grabbing the very first media type it finds instead of basing it on the rom's respective region/country. Could you add that feature so we get the "correct" media if available, otherwise following some preference order?
-
@paradadf I would like to do that, but I must admit it's a lot of work for something I don't need myself. So unless someone else implements it in a patch and sends it to me, it won't happen, I'm afraid.
When using 'screenscraper', Skyscraper always looks for 'wor' or 'us' (or whatever they are called) first. If it doesn't find those, it picks the next one in line, as I recall.
-
@muldjord understood, thanks!
-
Hi guys, I am sad to have to inform you that Skyscraper has been discontinued, effective immediately. I have been contacted by sources about the nature of the scrapings themselves. For that reason I no longer wish to pursue this project, as I have no intention of being an inconvenience to the websites or the authors of the information collected by Skyscraper.
Thank you for all of your feedback and support.
-
Sad to hear. I understand that some websites fear the traffic your tool might produce, but isn't that the case with any other scraper? Why collect data if it isn't going to be used?
Anyway, it's your program and your decision, of course.
-
That's really a bummer. The time and energy you must have invested in making this :(
All that metadata and cover art... there has to be a better way to store, manage, combine, and distribute it. I wonder if it's considered public domain; maybe the Internet Archive would host such a project.
-
Maybe I'm wrong about the traffic. It could be a copyright thing as well.
I found this to be the best scraper around because it works on the Pi and it can combine data from various sources. And the fact that it saves data locally saves traffic if you need to rescrape.
It seems @muldjord has strong enough reasons to pull the plug. Anyway, thanks for this great tool. (It would have been a nice addition to retropie-setup with a small GUI, like sselph's scraper.) :(
-
Stay tuned for news. Skyscraper might (MIGHT!) be online again soon-ish, but in a bit of a cut-down state, I'm afraid... More info when I get through all of the paperwork.
-
Great news. Regardless of whether it comes back, hopefully you can share some details as to what exactly happened that caused you to pull it. At a minimum it would be useful information for the developers of other scrapers, like @sselph, so that they don't run into the same issues.
-
@jdrassa Over the course of the past few weeks, I've felt like I was walking around a minefield. Sources started contacting me with not-so-friendly mails telling me to take out support for their sites, and I just didn't want to deal with that sort of negativity in a project that's supposed to be fun and helpful. Hence the take-down. I'd like to point out that I completely understand why sites won't allow scraping! It can hit a database hard if overused.
Skyscraper won't be available again until I have official permission to use each module. And unfortunately that also means that Skyscraper will be back in a very cut-down version... I completely understand this! But it is also pretty demotivating when all I wanted to do was help people out.
I even implemented the local cache to try and make people reuse the data. But I still can't control how people use it! And I created the local importer so you could get data from your own source text files and image files and so on...
Bottom line: Skyscraper might be back with ONLY the sources I have official permission from. And even then, I need to just trust the users not to overdo the scrapings.
-
That sucks, man. I can understand why people want it blocked, though, considering the effort put into making the websites, and also the bandwidth. I'd imagine "scraping" emumovies would be shut down immediately if all of the movies were being pulled from somewhere outside of their own site, where they can try to entice the user into a paid membership for the higher-quality videos.
Like I said, the NES collection I'm working on is looking great, and one day I'd like to start making videos of my own that conform to a standard length, have a title sequence at the end, and all have the same volume. I'd also like to do this for all of the other major console systems out there.
If there was a way to do this when I've got everything put together that would be easy for users to get access to, I'm open to any suggestions.
I don't know how long it would take me to do this, though. I've been working odd jobs to make ends meet the last few weeks to buy myself some more time, but this is an insane amount of work and I'm going to need a real job soon if I can't figure out some way of crowdfunding it.
Hopefully I can put out a full NES release that can be distributed to the public by the end of the year though.
-
@muldjord Thanks for sharing. Hopefully you will be able to relaunch it in some form. I can understand concerns about load, but I feel like scraping is the whole purpose of many of these sites.