I've been writing a lot of Python modules lately, and intend to put them up in my code section as soon as I can. But what starts as a single self-contained module inevitably splits into smaller modules, and I don't know how best to distribute the pieces.
For example, I wrote a code generation tool (module cogapp). It included an XML wrapper to make using XML data files easier; that wrapper was split out and posted as handyxml. To test handyxml, I wrote a small module (called makefiles) that generates trees of files from a dictionary description. The makefiles module used the standard library textwrap.dedent function to make specifying file contents easier, until I realized that dedent also expands tabs (why?), which I didn't want. My cogapp module has the dedenter I want (written before I knew about dedent), so I'm going to split that out as its own module too (I'll call it redenter).
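A tab-preserving dedent is only a few lines. Here's a rough sketch of the behavior I'm after (an illustration, not the actual cogapp code):

```python
import re

def dedent(text):
    """Remove the whitespace prefix common to all lines of text,
    without expanding tabs the way textwrap.dedent does.
    """
    lines = text.split('\n')
    # Find the longest leading run of spaces/tabs shared by all
    # non-blank lines.
    prefix = None
    for line in lines:
        if not line.strip():
            continue        # blank lines don't affect the prefix
        indent = re.match(r'[ \t]*', line).group(0)
        if prefix is None:
            prefix = indent
        else:
            while not indent.startswith(prefix):
                prefix = prefix[:-1]    # shrink to the shared part
    if not prefix:
        return text
    # Strip the common prefix from every line that has it.
    out = []
    for line in lines:
        if line.startswith(prefix):
            line = line[len(prefix):]
        out.append(line)
    return '\n'.join(out)
```

As far as I can tell, the standard dedent does essentially the same thing, except that it expands tabs first, which is exactly the step I want to skip.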
But now I've got these modules:
- makefiles, which requires redenter
- handyxml, whose unit test requires makefiles
- cogapp, which requires redenter and optionally uses handyxml (see the sketch after this list), and whose unit test requires makefiles
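(The "optionally uses" part is nothing fancy, just the usual guarded import. A sketch, with a hypothetical function name, since the real code differs:)

```python
# cogapp keeps working without XML support if handyxml is missing.
try:
    import handyxml
except ImportError:
    handyxml = None

def read_xml_args(path):
    """Hypothetical example: read generator arguments from an XML file."""
    if handyxml is None:
        raise ImportError("XML data files require the handyxml module")
    # ... hand the file at path to handyxml here ...
```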
So here are the questions:
- How should cogapp (for example) be posted? In a tar file bundling all of the required modules? By itself, with pointers to the other modules' pages?
- Should these modules all be in a package? If this were Java, they'd all be in a package called com.nedbatchelder.
- Should unit tests be included?
- How big does something need to be before I should really use distutils for it? (A minimal setup script is sketched after this list.)
- Should these things go on PyPI, or SourceForge?
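For perspective on the distutils question: the setup script for one of these single-module distributions would only be a handful of lines (a sketch with placeholder metadata):

```python
# setup.py -- minimal distutils script for a one-module distribution.
from distutils.core import setup

setup(
    name='redenter',
    version='0.1',
    py_modules=['redenter'],        # just the one .py file
    description='A dedenter that leaves tabs alone',
    author='Ned Batchelder',
)
```

After that, python setup.py sdist builds a source tarball, so the mechanical overhead is small even for a tiny module. The question is whether it's worth the ceremony.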