Let’s say you are running a conference, and let’s say your Call for Proposals is open, and is saving people’s talk ideas into a spreadsheet.
I am in this situation. Reviewing those proposals is a pain, because there are large paragraphs of text, and spreadsheets are a terrible way to read them. I did the typical software engineer thing: I spent an hour writing a tool to make reading them easier.
The result is csv_review.py. It’s a terminal program that reads a CSV file (the exported proposals). It displays a row at a time on the screen, wrapping text as needed. It has commands for moving around the rows. It collects comments into a second CSV file. That’s it.
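This isn't the actual csv_review.py, but the core pieces it describes fit in a minimal sketch: load the exported CSV, wrap one row's fields for the terminal, and append review comments to a second CSV. The function names here are mine, not the tool's.

```python
import csv
import textwrap


def load_rows(path):
    """Read the exported proposals CSV into a list of dicts keyed by header."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


def format_row(row, width=72):
    """Render one proposal as wrapped field/value text for the terminal."""
    lines = []
    for field, value in row.items():
        lines.append(f"== {field} ==")
        # textwrap.wrap returns [] for an empty value; keep a blank line instead.
        lines.extend(textwrap.wrap(value, width=width) or [""])
    return "\n".join(lines)


def save_comment(path, row_index, comment):
    """Append a review comment to a second CSV file."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([row_index, comment])
```

A command loop on top of this (next, previous, comment, quit) is a dozen more lines of `input()` handling; the data plumbing above is the whole trick.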
There are probably already better ways to do this. Everyone knows that to get answers from the internet, you don't ask questions; instead, you present wrong answers. More people will correct you than will help you. So this tool is my wrong answer to how to review CFP proposals. Correct me!
If you're running the conference, presumably you're in control of the website (though I could imagine that not being the case). Proposals should simply be stored in a database table that you could then step through with ease.
What I think you're actually thinking is, "why not use a DBMS (a database management system, a program that reads, writes and updates databases)?" A DBMS like MySQL or PostgreSQL seems like overkill for this application (though of course if you're already using one for other related data, it's reasonable to take advantage of it for this, too).
In this situation, there is already one piece of a database management system in place: Google Forms, which adds data to a database in a CSV file. CSV is a nice portable format, with lots of easy ways to read, display and write it (Excel and the Python csv module being just two of many) and thus makes it easy to write another part of a DBMS, a reporting application, which is exactly what Ned has done here. And I think it's a great solution. It also follows the Unix spirit of making small tools that work together on common data formats.
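Part of what makes CSV so portable is that the fiddly cases (embedded commas, quotes, even newlines inside a field, which long proposal abstracts will certainly contain) are handled by the format's quoting rules. A quick round-trip with the Python csv module shows this; the sample record here is made up:

```python
import csv
import io

# Round-trip a proposal record through the csv module: quoting,
# embedded commas, and embedded newlines are all handled for us.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Title", "Abstract"])
writer.writerow(["My Talk", 'Line one\nLine two, with a comma and "quotes"'])

rows = list(csv.reader(io.StringIO(buf.getvalue())))
assert rows[1][1] == 'Line one\nLine two, with a comma and "quotes"'
```

The one gotcha: open real files with `newline=""` so the csv module, not Python's universal-newline handling, gets to interpret line endings.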
For what it's worth, about ten or twelve years ago I wrote a limited RDBMS in Ruby that used TSV files (those being slightly easier to edit in Vim) for table storage and did joins and all the other usual stuff. Given that we were talking truly tiny databases (generally no more than a few thousand rows in a table), performance was just fine. I built a small accounting system on top of this. It was only a few hundred lines of code, development time was little different from trying to build something on PostgreSQL or whatever, and system administration and maintenance were an order of magnitude cheaper. (No servers beyond the Git repo server we already had, backup and a full audit log for free because the TSV files were in Git, and so on.)
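The storage-plus-join core of a system like that is genuinely small. Here is a sketch in Python (the original was Ruby); `read_tsv` and `join` are hypothetical names, and the naive nested lookup is exactly the kind of thing that stays fast at a few thousand rows:

```python
import csv


def read_tsv(path):
    """Load a TSV file into a list of dicts, header row as keys."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f, delimiter="\t"))


def join(left, right, key):
    """A naive inner join over two lists of dict rows; fine at tiny scale."""
    # Index the right-hand table by the join key once, then probe it.
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    out = []
    for row in left:
        for match in index.get(row[key], []):
            merged = dict(match)
            merged.update(row)  # left-hand columns win on collision
            out.append(merged)
    return out
```

With the tables in Git, as described above, you get history, backup, and an audit log without writing any of that code yourself.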
Trying to use what some would consider a "proper" system for something that can be done much more simply is one of the biggest time-wasters in software development. It's particularly egregious when the program is used by developers or people who work next to developers; doing a bit of support for these things is far cheaper than trying to write something robust enough to work with less support.