Python 2/3 and unicode file paths

This bug popped up in a script of mine:

For Python 2:

>>> os.path.abspath('.')
>>> os.path.abspath(u'.')

For Python 3:

>>> os.path.abspath('.')
>>> os.path.abspath(b'.')

That odd set of question marks is a completely useless and invalid path, in case you were wondering. The Windows cmd prompt sometimes prints question marks that aren’t garbage, but I assure you, these are useless and wrong question marks.

The solution is to always use unicode strings with path functions. A bit of a pain. Am I the only one who thinks this is failing silently? I’ll file it in the bug tracker and we’ll see.
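The type-propagation behavior is easy to see for yourself: os.path functions return text for text arguments and bytes for bytes arguments, so keeping every path unicode end-to-end sidesteps the lossy conversion. A minimal sketch (Python 3 shown here):

```python
import os

# os.path functions propagate the argument type:
# text in -> text out, bytes in -> bytes out.
text_path = os.path.abspath('.')
byte_path = os.path.abspath(b'.')

assert isinstance(text_path, str)
assert isinstance(byte_path, bytes)

# Historically on Windows the bytes API round-tripped through the
# ANSI code page, which is where non-ASCII directory names could
# degrade into question marks - hence: always pass unicode strings.
print(type(text_path).__name__, type(byte_path).__name__)
```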

import – complex or complicated?

In Python, life is really easy when all your .py files are in one directory. The moment you want to organize your code into folders there’s a wall of challenges you have to climb. I believe this is an issue that can be alleviated with one small fix.

Here’s a comparison of how a developer shares code across a project in C/C++ and Python:

Forms

  C/C++:
    #include <from_env_dirs_first>
    #include "from_local_dir_first"
    #include "abs_or_rel_file_system_path"

  Python:
    import module
    import module as alias
    from module import var
    from module import *
    from ..package_relative_path import module
    from package.absolute_path import module
    try:
        import one_thing
    except ImportError:
        import another as one_thing

Namespacing

  C/C++: Public toilet – everything included is global.
  Python: “Namespaces are one honking great idea — let’s do more of those!” Seriously, module encapsulation is fantastic.

Helpful extra knowledge

  C/C++: Makefiles/vcproj configurations of paths

Mandatory extra knowledge (“Gotchas”)

  C/C++:
    #pragma once (or the equivalent #ifdef)
    certain things aren’t allowed in .h files
    please don’t use absolute paths

  Python:
    syntax for intra-package imports
    modules intended for use as the main module of a Python application must always use absolute imports

Now this isn’t an exhaustive list as I want to discuss just a small subset from the above table. Also note that I didn’t go into “ctypes”, “#pragma comment(lib…)”, etc. as we’re talking about sharing code, not binaries.

In the 6 years of keyboard tapping I’ve done in C and Python, I was never once confused about how to access code between directories in C; Python, on the other hand, has gotten me quite a few times and I always need to re-RTFM. And I consider myself far more interested and fluent in Python than in C/C++. This may just be a problem with my head, but I’d like to vent either way.

Blah, blah, what’s the problem?

Skip this section if you’ve already had experience with said problem, I’m sure it’s as painful to read as it was to write.

Python has this really elegant solution for one-folder-mode: “import x” just gives you what you expected, either from the standard library (sys.path, etc.) or your local directory. If you have an “os.py” in that local directory then you shadow out the standard “import os”. Once you mix directories in there, python is suddenly afraid of shadowing and you can’t import things from a folder named “os” unless it has an “__init__.py”. So shadowing is allowed here and not there. If you want to access modules from the outside (dot dot and beyond), then you have to be in a package, use sys.path, os.chdir, or maybe implement file-system imports on your own.

Personally, I find myself doing this design pattern a lot:

  • The App directory
    • framework
    • components

I usually have an “if __name__ == ‘__main__’:” in my modules and there I have some sort of test, utility function, or a train of code-thought not yet organized.

How can code in one folder access code in another? After trying a few ways to do the import – here are the results.

So what’s needed for another_tool to import general_useful_things:

  • “from framework import general_useful_things” works if the importing module is itself only imported from the App directory; it does not work if we run that module directly. Does this mean __name__ == “__main__” is a useless feature I should ignore?
  • Here’s the rest of the list of failed attempts:
    #from app.framework import general_useful_things
    #from .app.framework import general_useful_things
    #from ..framework import general_useful_things
    #from .framework import general_useful_things
    #from . import framework
    #from .. import framework
  • And this little recipe works in most cases:
    import os
    import sys

    SRC_DIR = os.path.dirname(os.path.abspath(__file__))
    sys.path.append(os.path.join(SRC_DIR, '..', 'framework'))
    import general_useful_things

If you want to tinker around with that example directory structure here you go:

Python doesn’t have file-system imports

To summarize my rant – python has this mantra that your import lines should be concise and thus a complex searching import mechanism was built to avoid filesystem-path-like imports. The price we pay for that searching import mechanism is that you really need to learn how to use its implicit kinks and even then it’s not that fun to use.

The theoretical ideal solution

“import x” always imports from sys.path etc.; if you want to import something local you use “import ./local_dir_module” – the forward slash signals to the parser and the developer that a file-system import is taking place. “local_dir_module.py” needs to be in the current folder for the above example to work. Just in case it isn’t clear, the module “local_dir_module” will be accessed as usual, without the “.py”, dots or slashes. The import statement is the only place where slashes are allowed, and the result of the import is a module in the importer’s namespace.

That’s as explicit, simple, concise and useful as it gets.

The practical solution

I don’t mind if “import x” still works as it does today, the main point is that now you can do things like “import ../../that_module_from_far_away”. So you can actually keep python 100% backwards compatible and still add this feature.
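Until something like that lands, you can approximate relative file-system imports on top of the existing machinery. A minimal sketch using importlib (the helper name and paths are mine, not a stdlib API):

```python
import importlib.util
import os
import sys


def import_path(relative_path, anchor=None):
    """Import a .py file by file-system path, relative to `anchor`
    (defaults to the current working directory)."""
    anchor = anchor if anchor is not None else os.getcwd()
    full_path = os.path.abspath(os.path.join(anchor, relative_path))
    name = os.path.splitext(os.path.basename(full_path))[0]
    spec = importlib.util.spec_from_file_location(name, full_path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module      # so introspection/pickling work
    spec.loader.exec_module(module)
    return module
```

With that helper, `far = import_path('../../that_module_from_far_away.py')` behaves roughly like the proposed `import ../../that_module_from_far_away`.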

Concerning the backslash/forwardslash debate – I’m a windows guy and I don’t mind using the forward slash for Python, Windows doesn’t mind it either (“/” only fails in a few specific scenarios like cmd autocomplete). Another fun fact is you can avoid littering your app with “__init__.py” files if it isn’t going to be accessed using that big old search-import-package mechanism.

I realize this whole fiasco might raise the question of absolute path imports; in my opinion these shouldn’t be allowed. Absolute includes in C/C++ destroy portability, impose annoying folder structure constraints, and they’re ever-so tempting at late hours when you don’t really want to calculate the amount of “..” needed. For the special cases that might still need this, the instrumentation existing in python and e.g. import_file are enough.

The good things about “__init__.py”

Many packages use “__init__.py” as a way to organize their APIs to the outside world. Your package folder can have tons of scripts and only what you included in “__init__.py” is exposed when your folder is imported directly (e.g. json in the std-library). So don’t take this as an attack on “__init__.py”; it’s just that the import mechanism seems incomplete in my eyes. Just to be a bit specific – package maintainers don’t need to do stuff like “import os as _os” to avoid littering their module namespace when they use “__init__.py” as their API; that’s a nice thing to have.
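To make the pattern concrete, here’s a self-contained sketch that builds a throwaway package on disk whose “__init__.py” exposes only the curated API (all names here are made up for illustration):

```python
import os
import sys
import tempfile

# Create demo_pkg/ with one internal module and an __init__.py.
pkg_dir = os.path.join(tempfile.mkdtemp(), 'demo_pkg')
os.makedirs(pkg_dir)

# The internal module: lots of helpers could live here...
with open(os.path.join(pkg_dir, '_internals.py'), 'w') as f:
    f.write('def useful():\n    return "useful result"\n')

# ...but __init__.py decides what the outside world sees.
with open(os.path.join(pkg_dir, '__init__.py'), 'w') as f:
    f.write('from ._internals import useful\n__all__ = ["useful"]\n')

sys.path.insert(0, os.path.dirname(pkg_dir))
import demo_pkg

print(demo_pkg.useful())  # only the curated entry point is advertised
```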

Also, I’d like to hear other justifications as I’m sure more than a few exist.

The drawbacks of slashes and file-system-imports

  1. From a compatibility viewpoint, old packages aren’t affected as we’re introducing the “forward slash” in whatever future python version. Whoever uses this feature won’t be compatible with older python versions.
  2. Windows users and *nix users might argue over whether or not to allow backslashes, I think it’s not that important. Though the internet has forward slashes, so that makes it 2 platforms against 1.
  3. It’s uglier (though today’s relative imports are just as ugly and harder to learn).
  4. People might ask for absolute imports.
  5. Dividing the community and its packages into “file-system-importers” and “package-search-importers”.
  6. *reserved for complaints in the comments*


I’ve tried to do packages the existing python way and I think we can do better. The “__init__.py”-based search mechanism works great for whatever is in sys.path, though I believe its pains outweigh its gains for organizing code. Here’s to hoping there’s a chance for relative file-system imports in standard python.

References: “There are Many Ways to Import a Module” (January 07, 1999) – “The import and from-import statements are a constant cause of serious confusion for newcomers to Python”

The GIL Detector

Ever wonder if your flavor of python has a Global Interpreter Lock? The recipe checks that.

In this day and age CPython is still core-locked, though we are seeing improvements thanks to Antoine Pitrou and other great people of the Python community. Wanting to measure just how bad the problem is, my way of looking at it was through a number I call the “python effective core count”.

Python Effective Core Count

How many cores does python see? If you bought a quad core, how many cores can each python process utilize? The method: measure how long it takes to complete a given amount of work, W; then measure how long it takes python to run 2W using 2 threads, 3W on 3 threads, etc. The script calculates:

effective_cpus = amount_of_work / (time_to_finish / baseline)

Where the baseline is the time_to_finish for 1 work unit. E.g. if it took the same amount of time to finish 4W (amount_of_work = 4) on 4 threads as it took 1W on 1 thread – python is utilizing 4 cores.
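A condensed sketch of that measurement (the busy-loop work unit and the timing code are my simplification of the recipe, not its exact code):

```python
import threading
import time


def work_unit(n=200_000):
    # A CPU-bound busy loop stands in for one unit of work, W.
    total = 0
    for i in range(n):
        total += i * i
    return total


def effective_cpus(amount_of_work):
    # Baseline: 1W on 1 thread.
    start = time.perf_counter()
    work_unit()
    baseline = time.perf_counter() - start

    # N*W on N threads.
    workers = [threading.Thread(target=work_unit)
               for _ in range(amount_of_work)]
    start = time.perf_counter()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    time_to_finish = time.perf_counter() - start

    return amount_of_work / (time_to_finish / baseline)


print(effective_cpus(4))  # ~1.0 on a GIL-bound CPython
```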


I recommend reading the whole output to see the exact numbers.

Implementation Effective Core Count
Jython 2.5.2 3.8/4 cores
IronPython 2.7 3.2/4 cores
PyPy-1.5 1.0/4 cores
Stackless Python 3.2 1.0/4 cores
CPython 3.2 1.0/4 cores
CPython 2.7 0.2/4 cores

Basically, Jython has the best multithreading, with IronPython not too far behind. I know multiprocessing is really easy in python, but it’s still harder than threading. We have to solve this before 8-core CPUs become the standard at grandma’s house. Those other languages (C/C++) easily utilize 3.9-4.0 cores of a quad core machine, why can’t we? An honorable mention goes to PyPy, which was by far the fastest to execute the benchmark (10x faster). PyPy is definitely the future of Python, hopefully they can save us all. A note about CPython 2.7 – yes, that number is under 1, because adding threads to CPU-intensive python tasks hurt performance badly under the old GIL; Antoine fixed that in CPython 3.2.

In my opinion the community should call this a bug and have a unittest in there yelling at us until we fix it, though it’s easy for me to complain when I’m not the core-dev. Maybe I can join PyPy and see what I can do when I find some free time. Hopefully so.

Edit – updated as corrected at reddit.

A new module – import_file

So I had this python web server and I wanted to write a script that does some maintenance. The problem was, if the maintenance script isn’t part of the web server’s package tree, it couldn’t use any of the web server modules. A hacky way to get around this is to add the target module’s directory to the path:

    import sys
    sys.path.append('/path/to/your_module_dir')  # the target module's directory
    import your_module

Another trick is using os.chdir. Even when importing modules from the same package things can become confusing as can be learned from PEP 328, PEP 366, an abundance of stackoverflow questions on the subject and many more. I’d like to quote The Zen of Python:

    Simple is better than complex.
    There should be one-- and preferably only one --obvious way to do it.
    If the implementation is hard to explain, it's a bad idea.

I don’t believe any of these can be said about python imports, at least not for anything past the trivial case of one-folder-with-all-the-modules. The moment you want to organize your project in folders it becomes complex if not complicated and unnatural.

“import math” just works and that’s great, I just wished there was an equivalent to the banal:

#include "path/to/module"

from those inferior languages. So I wrote import_file, which can be used like this:

    >>> from import_file import import_file
    >>> mylib = import_file('c:\\')
    >>> another = import_file('relative_subdir/')

It’s very similar to the imp module syntax, except the function requires one argument less. This is the way it should be imo. Enjoy import_file at google code, from the cheese shop, “easy_install import_file” or pip, etc.

Scrolla – A Javascript Scroller

Here it is – Scrolla for Queen’s Bohemian Rhapsody.

This was a fairly straightforward project. Reddit’s graulund made a beautiful comic depicting one of my favorite songs – Queen – Bohemian Rhapsody. Because I believe in lazy people I made this auto-scrolling javascript. The youtube videos doing the scrolling just weren’t awesome enough. jQuery’s “animate” is used to scroll, swfobject to embed a chromeless youtube player that’s pushed out of the frame. Please do “view source”.

I tried to make it easy to reuse “scrolla” so all the important variables are at the top of the html file (which video, which picture and at what rhythm to scroll). The “scrolla” thing could actually be a cute web app where people submit their scrolling pictures with youtube background sounds or music, but I didn’t have the diligence for that just yet. If anyone tries to make another scrolla I’d suggest to use the buggy but useful feature I added – suffixing the url variable “?at=30” will start the scrolling and youtube video at 30 seconds. When editing a 5 minute long song by Queen, I found this feature absolutely necessary. Without it I would’ve been in replay hell for big piles of minutes.

Pythonistas use chrome

I was very happy to see the wall of shame got so much attention. To my surprise, an extraordinary number of visitors use the google browser, as can be seen in the analytics data. Out of 24,000 visitors:

48.1% – Chrome
31.61% – Firefox
10.9% – Safari
3.95% – Mozilla compatible (what is this?)
2.35% – Internet Explorer
2.19% – Opera

Most of the visitors were unique (you guys aren’t coming back?) and it’s a pretty technical, python-programming-related topic, so I can assume most of these are "pythonistas". Imo, just like porn decides who wins the media battles, programmers are the ones who decide who wins the browser battles. You know their grandparents/aunts/kids/uncles/etc. are all going to ask them which browser to use. So expect a very bright future for google chrome.

The decline of google search

So I revealed the python 3 wall of shame a few days ago and immediately it was the best result on google when I searched for “python wall of shame” and “python3wos”. Seems to make sense, as there was no wall of shame for python yet. Now let’s look at the results 3 days later:

Search results for "Python 3 Wall of Shame"


You can see that on the first page of google results you can find a million references to the wall, but no links to the actual website or blog post. The same can be said about results for “python3wos” which isn’t even a word but it’s the subdomain of the site.

Search results for "python3wos"


It’s not too bad, since in most of these results the real website is just a click away. I just really don’t understand what happened – maybe I did something wrong? Anyhow, it’s pretty obvious that all the kids are only using facebook to communicate nowadays. Soon they’ll just feel comfy with facebook’s bing search, a few years of that and the game will be over. Here’s to hoping google sharpens its edge.

edit – a few hours after writing this article, the search “python 3 wall of shame” is now fixed, though “python3wos” still only gets you secondhand links. Let’s do an SEO experiment: I’ll add this link python3wos here and add the word “python3wos” on the wall.

The Python 3 Wall of Shame

Here’s my attempt at motivating package maintainers to port to python 3 – and you can check out the code that generates the chart. I basically scraped PyPI (13,000 webpages) for this info and formatted it into an HTML table which is uploaded to appspot. I added a ‘cron’ task on my PC to recrawl PyPI every Sunday so the chart stays fresh. When developing the WOS I used filecache (from the previous blog post) so I could write the code that parses the crawl as though it had just scraped PyPI, while in fact everything was cached to disk. Without filecache I would have had to either wait for ages or write code that stores the scrapes and reparses them. About 10 lines turn into an import and a decorator – now that’s magic.

I hope no one takes offense from this. The situation we’re starting at is pretty bad. Only 11 out of the top 100 packages on PyPI are labeled as supporting python3. There are a few glitches (e.g. multiprocessing) where the developers simply haven’t yet labeled the package as python 3 compliant.

Once we pass the 50% mark I guess we can change it from “Wall of Shame” to “World of Strength” or something because the subdomain is “wos”.


filecache

Working on a project that needed some web scraping, I made filecache. I just wanted a simple decorator that would save the results of a function to disk. Memcache and other recipes seemed to be either overly complex or just plain memory caching (without persistence). If you want to help improve it then check out the google code filecache project.
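The core of the idea fits in a few lines. This is only a sketch of the concept, not the actual filecache code – a real version has to deal with unhashable arguments, invalidation and corrupt cache files:

```python
import functools
import os
import pickle
import tempfile


def filecache(func):
    """Cache a function's results to disk, keyed by its arguments."""
    cache_file = os.path.join(tempfile.gettempdir(),
                              func.__name__ + '.filecache')
    try:
        with open(cache_file, 'rb') as f:
            cache = pickle.load(f)
    except (OSError, EOFError, pickle.PickleError):
        cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)          # the expensive call
            with open(cache_file, 'wb') as f:  # persist for next run
                pickle.dump(cache, f)
        return cache[args]

    return wrapper


@filecache
def scrape(page_number):
    # Stand-in for an expensive scrape of one PyPI page.
    return 'contents of page %d' % page_number

print(scrape(7))  # computed once, served from disk afterwards
```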

Life Recording

If a baby born today was given a necklace with 2 mics and a camera, could we record its entire lifetime? At what quality? Let’s assume dvdrip quality with xvid4 – excellent compression with great sound and visuals – at 700 MB per 2 hours. If we assume an 80-year life expectancy then we have 80 * 365 * 24 = 700,800 hours in a lifetime. This amounts to about 240 terabytes. Let’s say we cut off the boring half of the day and take a lower quality recording (1/5 the file size, as on YouTube for example) and we’re down to about 24 terabytes. Using smart VBR we might even be able to get some hi-def for the important parts.
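The back-of-the-envelope arithmetic, spelled out:

```python
# Storage estimate for recording an entire life, per the figures above.
mb_per_hour = 700 / 2                 # dvdrip: 700 MB per 2 hours
hours = 80 * 365 * 24                 # 80-year life expectancy
assert hours == 700_800

full_tb = hours * mb_per_hour / 1_000_000   # MB -> TB (decimal)
print(round(full_tb))                 # 245 TB, i.e. "about 240 terabytes"

# Skip the boring half of the day and use ~1/5 the bitrate:
reduced_tb = full_tb / 2 / 5
print(round(reduced_tb, 1))           # 24.5 TB
```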

So according to hard drive capacity trends, this is feasible by 2015 (hard drive capacity multiplies by 10 every 5 years).

The only problem nowadays is the battery. Current tech would require your life recorder to recharge or have its batteries replaced once a day. So by 2015 all the crazy modern technologies will be ready to record an entire human lifespan from a first-person point of view without replacing any piece of equipment, except for the energy source. By the way, the modern battery was invented around 1800. We humans have had 200 years to work on this thing, but we’re still terrible at it.

Ok, so how do we tap into our body’s energy supplies? Please answer this question by 2015, thank you.