Here it is – Scrolla for Queen’s Bohemian Rhapsody.
I tried to make “scrolla” easy to reuse, so all the important variables are at the top of the HTML file (which video, which picture, and at what rhythm to scroll). The “scrolla” thing could actually be a cute web app where people submit their scrolling pictures with YouTube background sounds or music, but I haven’t had the diligence for that just yet. If anyone tries to make another scrolla, I’d suggest using the buggy but useful feature I added: appending “?at=30” to the URL starts the scrolling and the YouTube video at 30 seconds. When editing a five-minute-long Queen song, I found this feature absolutely necessary; without it I would’ve been in replay hell for big piles of minutes.
I was very happy to see the Wall of Shame got so much attention. To my surprise, an extraordinary number of visitors use Google’s browser, as can be seen in the analytics data. Out of 24,000 visitors:
48.1% – Chrome
31.61% – Firefox
10.9% – Safari
3.95% – Mozilla compatible (what is this?)
2.35% – Internet Explorer
2.19% – Opera
Most of the visitors were unique (you guys aren’t coming back?), and this is a pretty technical, Python-programming-related topic, so I can assume most of them are “pythonistas”. IMO, just as porn decides who wins the media battles, programmers are the ones who decide who wins the browser battles: you know their grandparents/aunts/kids/uncles/etc. are all going to ask them which browser to use. So expect a very bright future for Google Chrome.
Here’s my attempt at motivating package maintainers to port to Python 3; you can also check out the code that generates the chart. I basically scraped PyPI (13,000 web pages) for this info and formatted it into an HTML table, which is uploaded to appspot. I added a ‘cron’ task on my PC to recrawl PyPI every Sunday so the chart stays fresh. When developing the WOS I used filecache (from the previous blog post), so I could write the code that parses the crawl as though it had just scraped PyPI, while in fact everything was cached to disk. Without filecache I would have had to either wait for ages or write code that stores the scrapes and reparses them. About 10 lines turn into an import and a decorator; now that’s magic.
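For illustration, the table-generation step could look something like this minimal sketch. The function name, row format, and colors here are my own assumptions, not the actual WOS code (which is linked above):

```python
def make_table(rows):
    """Render (package_name, supports_py3) pairs as a colored HTML table.

    A hypothetical sketch of the chart-generation step, not the real
    WOS code.
    """
    cells = []
    for name, py3 in sorted(rows):
        color = "#afa" if py3 else "#faa"  # green for ported, red for shame
        cells.append('<tr style="background:%s"><td>%s</td><td>%s</td></tr>'
                     % (color, name, "yes" if py3 else "no"))
    return "<table>%s</table>" % "".join(cells)
```

Feeding it the parsed scrape results and uploading the resulting string to appspot is then the easy part.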
I hope no one takes offense at this. The situation we’re starting from is pretty bad: only 11 out of the top 100 packages on PyPI are labeled as supporting Python 3. There are a few glitches (e.g. multiprocessing) where the developers simply haven’t labeled the package as Python 3 compliant yet.
Once we pass the 50% mark I guess we can change it from “Wall of Shame” to “World of Strength” or something because the subdomain is “wos”.
Working on a project that needed some web scraping, I made filecache. I just wanted a simple decorator that would save the results of a function to disk. Memcache and other recipes seemed to be either overly complex or just plain memory caching (without persistence). If you want to help improve it then check out the google code filecache project.
A week ago I found out some strange stuff was going on when I searched for “aaaaaaaaaaa”, so I made a python script (http://pastebin.com/f35806f8f) to gather the search results and wanted to plot a nice graph. The problem was I couldn’t find a nice place to show and store the data, so I made a pastebin-esque graphbin at http://graphbin.appspot.com where you can see the google search results:
I used “Paste-It” (http://paste-it.appspot.com/) as the base for graphbin, so I got a bit familiar with the “Paste” (http://pythonpaste.org) framework. Sadly, this wasn’t a pleasant journey. A lot of the connected pieces are scattered all over the place and yet seem heavily entangled (e.g. main.py uses page/pasties/add.py, which outputs the relevant template page on line 125 of 342, after a flow that was no fun to follow). I’m not sure if this is a problem with “Paste” or a problem with the “Paste-It” implementation, but it got a bit ugly. Not to mention some bad naming: the “paste” folder holds the “web” framework folder and all the models (why not name the folder “models” and move the framework out? and why are pages “pasties”?).
Maybe some Paste lover can dazzle me with some info on how Paste compares to or differs from Django. Drop a comment.
I’ve got too many things on my mind, I was thinking of doing the following:
- Help improve 3to2; it sounds like it could help bridge the PyPI gap and allow writing “backwards compatible” libraries using the newest of technologies (it’s a lot better than imitating decorator behavior with the silly syntax it’s sugar for).
- Help port Numpy to py3k
- Help port Django to py3k
- Build a small script that’ll crawl svn.python.org and give points to developers. [update: ohloh did it]
- Build a python ascii art lib (not one that does the greyscale trick, one that actually tests for lines etc).
- Build a pinax captcha app
- Make a py2exe, Freeze, py2app mashup that can easily compile on all platforms.
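On the decorator remark in the first item: the @ syntax really is just sugar for calling the decorator and rebinding the name, which is what pre-2.4-style code (and mechanical back-translation) has to spell out by hand. A toy illustration:

```python
def shout(func):
    """A toy decorator: uppercase whatever func returns."""
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

# the modern sugar:
@shout
def greet(name):
    return "hello, %s" % name

# the desugared equivalent:
def greet_old(name):
    return "hello, %s" % name
greet_old = shout(greet_old)
```

Both greet("world") and greet_old("world") return "HELLO, WORLD"; the first form just keeps the rebinding next to the def.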
I think that’s it. I wonder how much of the above I’ll be able to complete by next month, considering I’m really not good enough at Rock Band 2 drums yet.
These have to be some of the most interesting things I’ve seen recently:
- The Computer Language Benchmarks Game – compares the CPU time, memory footprint and source size of all your favorite languages. It also tests multi-core vs. single-core and 32-bit vs. 64-bit.
- Meta analysis of the above game – visualizes all of the data from the above benchmark. No groundbreaking conclusions, but the graphs are very nice to look at.
It’s really interesting to see this information laid out so plainly; one can pick up a few ballpark performance coefficients from it. There are a few things the benchmarks don’t measure (e.g. readability, richness of libraries, strength and size of the developer community), but it’s interesting nonetheless.
This might be a bit silly, but I’m pretty proud of my first diff in the python trunk. This is the ticket where all the magic happened.
I think I’m content. No need to be working too hard getting pyopt into the standard library, etc…