Ned Batchelder
I fixed Python!
November 2012
About a month ago, I found a performance bug in the tokenize standard library module, and with help from Aron Griffis, submitted a patch to fix it. Yesterday was a Python bug day, and Ezio Melotti merged my change, so I have officially contributed to CPython!
The bug in tokenize was an obscure case: if the code ends with a line that starts with a non-space character and then ends with many spaces and no newline, the tokenizer exhibits O(N²) running time, where N is the number of trailing spaces. Each space is tokenized as an error token (because no good token follows it), so N tokens are produced, and producing each one takes O(N) time for the regex to confirm that no good token follows it, giving O(N²) behavior overall.
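A minimal sketch of how one might reproduce that input shape (the helper name and timing scaffolding are mine, not from the patch; on a fixed Python this completes almost instantly, while on an unpatched tokenizer doubling N roughly quadruples the time):

```python
import io
import time
import tokenize

def tokenize_trailing_spaces(n):
    # Source ending with a non-space line, then n spaces, and no newline --
    # the exact input shape that triggered the quadratic behavior.
    source = "x = 1" + " " * n
    start = time.perf_counter()
    tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
    elapsed = time.perf_counter() - start
    return tokens, elapsed

tokens, elapsed = tokenize_trailing_spaces(10_000)
```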
I discovered this working on code that grades student submissions at edX. For some reason there was a submission ending with 40,000 spaces and no newline, and it was taking 20 minutes to tokenize!
If you are running a server that tokenizes untrusted Python code, you might want to throw an .rstrip() into it to prevent this case...
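A sketch of that defensive wrapper (a hypothetical helper of my own naming, not code from the edX grader):

```python
import io
import tokenize

def safe_tokenize(source):
    # Strip trailing whitespace and guarantee a final newline before
    # tokenizing untrusted code, so a long run of trailing spaces can't
    # trigger the quadratic error-token path on older Pythons.
    cleaned = source.rstrip() + "\n"
    return list(tokenize.generate_tokens(io.StringIO(cleaned).readline))
```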