Coverage.py and third-party code

Monday 12 April 2021

I’ve made a change to coverage.py, and I could use your help testing it before it’s released to the world.

tl;dr: install this and let me know if you don’t like the results:
pip install coverage==5.6b1

What’s changed? Previously, coverage.py didn’t know about the third-party code you had installed. With no options specified, it would measure and report on that code, for example in site-packages. A common solution was to use --source=. to measure only the code in the current directory tree. But many people put their virtualenv in the current directory, so third-party code installed into the virtualenv would still get reported.

Now, coverage.py understands where third-party code gets installed, and won’t measure code it finds there. This should produce more useful results with less work on your part.

This was a bit tricky because the --source option can also specify an importable name instead of a directory, and coverage.py still has to measure that code even if it is installed where third-party code goes.
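Here’s a minimal sketch of the two forms, using the coverage API (the package name "mypackage" is just an illustration):

import coverage

# A directory: measure only code under the current directory tree.
cov = coverage.Coverage(source=["."])

# An importable name: this package is measured even if it is installed
# where third-party code normally lives (for example, site-packages).
cov = coverage.Coverage(source=["mypackage"])

cov.start()
# ... import and run the code you want measured ...
cov.stop()
cov.report()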

As of now, there is no way to change this new behavior. Third-party code is never measured.

This is kind of a big change, and there could easily be unusual arrangements that aren’t handled properly. I would like to find out about those before an official release. Try the new version and let me know what you find out:

pip install coverage==5.6b1

In particular, I would like to know if any of the code you wanted measured wasn’t measured, or if there is code being measured that “obviously” shouldn’t be. Testing on Debian (or a derivative like Ubuntu) would be helpful; I know they have different installation schemes.

If you see a problem, write up an issue. Thanks for helping.

Gefilte Fish: gmail filter creation

Sunday 28 March 2021

At work, to keep up with mailing lists and GitHub notifications, I had more than fifty GMail filters. It wasn’t too bad to create them by hand with the GMail UI, but I’m sure there were filters there I didn’t need any more.

But then I wanted a filter with both an if-action and an else-action. Worse, I wanted: if A, then do this; if B, do this; else, do that. GMail filters just aren’t constructed that way. It was going to be a pain to set them up and maintain them.

Looking around for tools, I found gmail-britta, a Ruby DSL. This was the right kind of tool for me, except I don’t write Ruby. I hadn’t found gmail-yaml-filters, but I don’t think I want to write YAML.

gmail-tools looked promising, but my work GMail account wouldn’t let me follow its authentication steps. Honestly, I often run afoul of authentication when trying to use APIs. (See Support windows bar calendar for another project I built in a strange way specifically to avoid having to figure out authentication.)

So naturally, I built my own module to do it: Gefilte Fish is a Python DSL (domain-specific language) of sorts to create GMail filters. (The name is fitting since this is the start of Passover.) Using gefilte, you write Python code to express your filters. Running your program outputs XML that you then import into GMail to create the filters.

The DSL lets you write this to make filters:

from gefilte import GefilteFish, GitHubFilter

# Make the filter-maker and use its DSL. All of the methods of GitHubFilter
# are now usable as global functions.
fish = GefilteFish(GitHubFilter)
with fish.dsl():

    # Google's spam moderation messages should never get sent to spam.
    with replyto("noreply-spamdigest@google.com"):
        never_spam()
        mark_important()

    # If the subject and body have these, label it "liked".
    with subject(exact("[Confluence]")).has(exact("liked this page")):
        label("liked")

    with from_("notifications@github.com"):
        # Skip the inbox (archive them).
        skip_inbox().label("github")

        # Delete annoying bot messages.
        with from_("renovate[bot]"):
            delete()

        # GitHub sends to synthetic addresses to provide information.
        with to("author@noreply.github.com"):
            label("mine").star()

        # Notifications from some repos are special.
        with repo("myproject/tasks") as f:
            label("todo")
            with f.elif_(repo("otherproject/something")) as f:
                label("otherproject")
                with f.else_():
                    # But everything else goes into "Code reviews".
                    label("Code reviews")

    # Some inbound addresses come to me; mark them so
    # I understand what I'm looking at in my inbox.
    for toaddr, the_label in [
        ("info@mycompany.com", "info@"),
        ("security@mycompany.com", "security@"),
        ("con2020@mycompany.com", "con20"),
        ("con2021@mycompany.com", "con21"),
    ]:
        with to(toaddr):
            label(the_label)

print(fish.xml())

To make the DSL flow somewhat naturally, I definitely bent the rules on what is considered good Python. But it let me write succinct descriptions of the filters I want, while still having the power of a programming language.
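To give a flavor of the rule-bending, here’s a toy sketch (not the actual gefilte code; TinyDSL and greet are made-up names) of how a with-block can make an object’s methods usable as bare global functions:

import sys

class TinyDSL:
    """A toy stand-in for GefilteFish to show the global-injection trick."""

    def greet(self, name):
        print(f"hello, {name}")

    def dsl(self):
        dsl_obj = self

        class _Scope:
            def __enter__(self):
                # Copy the DSL object's public methods into the caller's
                # globals so they can be called without a prefix.
                self.caller_globals = sys._getframe(1).f_globals
                self.added = {
                    name: getattr(dsl_obj, name)
                    for name in dir(dsl_obj)
                    if not name.startswith("_") and name != "dsl"
                }
                self.caller_globals.update(self.added)
                return dsl_obj

            def __exit__(self, *exc):
                # Remove the injected names on the way out of the block.
                for name in self.added:
                    self.caller_globals.pop(name, None)
                return False

        return _Scope()

fish = TinyDSL()
with fish.dsl():
    greet("world")      # greet() works as a bare name inside the block

Poking names into another frame’s globals is definitely not polite Python, but it’s the kind of trick that lets the filter definitions read so cleanly.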

Machete mode: tagging frames

Saturday 20 March 2021

I had a puzzle about Python execution today, and used a machete-mode debugging trick to figure it out. If you haven’t heard me use the term before, machete mode is when you use gross temporary code to get information any way you can.

Here’s what happened: I added a new parameterized test to the coverage.py test suite. It was really slow, so I ran it with timings displayed:

$ tox -qe py39 -- -n 0 -k virtualenv --durations=0 -vvv
... output omitted ...
========================= slowest test durations ==========================
7.14s call     tests/test_process.py::VirtualenvTest::test_making_virtualenv[False]
6.23s call     tests/test_process.py::VirtualenvTest::test_making_virtualenv[True]
0.47s setup    tests/test_process.py::VirtualenvTest::test_making_virtualenv[False]
0.01s setup    tests/test_process.py::VirtualenvTest::test_making_virtualenv[True]
0.01s setup    tests/test_process.py::VirtualenvTest::test_making_virtualenv[False]
0.01s setup    tests/test_process.py::VirtualenvTest::test_making_virtualenv[True]
0.00s teardown tests/test_process.py::VirtualenvTest::test_making_virtualenv[True]
0.00s teardown tests/test_process.py::VirtualenvTest::test_making_virtualenv[True]
0.00s teardown tests/test_process.py::VirtualenvTest::test_making_virtualenv[False]
0.00s teardown tests/test_process.py::VirtualenvTest::test_making_virtualenv[False]
========================= short test summary info =========================
FAILED tests/test_process.py::VirtualenvTest::test_making_virtualenv[False] ...
FAILED tests/test_process.py::VirtualenvTest::test_making_virtualenv[True] ...

Huh, that’s weird: two tests (“call”), but four invocations of my test setup function, and four of the teardown. I’ve only just recently converted this test suite over from a unittest.TestCase foundation, and I have some odd shims in place to reduce the code churn. Thinking about the double setup, I figured either my shims were wrong, or I was hitting some strange edge case in how pytest runs tests.

But how to figure out why the setup is called twice for each test run? I decided to use a tool I’ve reached for often in the past: capture the stack information and record it someplace:

def setup_test(self):
    import inspect
    import os
    project_home = "/Users/ned/coverage"
    site_packages = ".tox/py39/lib/python3.9/site-packages/"
    with open("/tmp/foo.txt", "a") as foo:
        print("setup_test", file=foo)
        for t in inspect.stack()[::-1]:
            # t is (frame, filename, lineno, function, code_context, index)
            frame, filename, lineno, function = t[:4]
            filename = os.path.relpath(filename, project_home)
            filename = filename.replace(site_packages, "")
            show = "%30s : %s:%d" % (function, filename, lineno)
            print(show, file=foo)

This is my test setup function which is being called too often. I used a low-tech logging technique: append to a temporary file. For each frame in the call stack, I write the function name and where it’s defined. The file paths are long and repetitive, so I make them relative to the project home, and also get rid of the site-packages path I’ll be using.

This worked: it gave me four stack traces, one for each setup call. But all four were identical:

setup_test
                      <module> : igor.py:424
                          main : igor.py:416
           do_test_with_tracer : igor.py:216
                     run_tests : igor.py:133
                          main : _pytest/config/__init__.py:84
                      __call__ : pluggy/hooks.py:286
                     _hookexec : pluggy/manager.py:93
                      <lambda> : pluggy/manager.py:84
                    _multicall : pluggy/callers.py:187
           pytest_cmdline_main : _pytest/main.py:243
                  wrap_session : _pytest/main.py:206
                         _main : _pytest/main.py:250
                      __call__ : pluggy/hooks.py:286
                     _hookexec : pluggy/manager.py:93
                      <lambda> : pluggy/manager.py:84
                    _multicall : pluggy/callers.py:187
            pytest_runtestloop : _pytest/main.py:271
                      __call__ : pluggy/hooks.py:286
                     _hookexec : pluggy/manager.py:93
                      <lambda> : pluggy/manager.py:84
                    _multicall : pluggy/callers.py:187
       pytest_runtest_protocol : flaky/flaky_pytest_plugin.py:94
       pytest_runtest_protocol : _pytest/runner.py:78
               runtestprotocol : _pytest/runner.py:87
               call_and_report : flaky/flaky_pytest_plugin.py:138
             call_runtest_hook : _pytest/runner.py:197
                     from_call : _pytest/runner.py:226
                      <lambda> : _pytest/runner.py:198
                      __call__ : pluggy/hooks.py:286
                     _hookexec : pluggy/manager.py:93
                      <lambda> : pluggy/manager.py:84
                    _multicall : pluggy/callers.py:187
          pytest_runtest_setup : _pytest/runner.py:116
                       prepare : _pytest/runner.py:362
                         setup : _pytest/python.py:1468
                  fillfixtures : _pytest/fixtures.py:296
                 _fillfixtures : _pytest/fixtures.py:469
               getfixturevalue : _pytest/fixtures.py:479
        _get_active_fixturedef : _pytest/fixtures.py:502
        _compute_fixture_value : _pytest/fixtures.py:587
                       execute : _pytest/fixtures.py:894
                      __call__ : pluggy/hooks.py:286
                     _hookexec : pluggy/manager.py:93
                      <lambda> : pluggy/manager.py:84
                    _multicall : pluggy/callers.py:187
          pytest_fixture_setup : _pytest/fixtures.py:936
             call_fixture_func : _pytest/fixtures.py:795
             connect_to_pytest : tests/mixins.py:33
                    setup_test : tests/test_process.py:1651

I had hoped that perhaps the first and second calls would have slightly different stack traces, and the difference would point me to the reasons for the multiple calls. Since the stacks were the same, there must be loops involved somewhere. How to find where in the stack they were?

If I were familiar with the code in question, reading one stack trace might point me to the right place. But pytest is opaque to me, and I didn’t want to start digging in. I’ve got a few different pytest features at play here, so it seemed like it was going to be difficult.

The stack traces were the same because they only show the static aspects of the calls: who calls who, from where. But the stacks differ in the specific instances of the calls to the functions. The very top frame is the same (there’s only one execution of the main program), and the very bottom frame is different (there are four executions of my test setup function). If we find the highest frame that differs between two stacks, then we know which loop is calling the setup function twice.

My first thought was to show the id of the frame objects, but ids get reused as objects are recycled from free lists. Instead, why not just tag the frames explicitly? Every frame has its own set of local variables, stored in a dictionary attached to the frame. I write an integer into each frame, recording the number of times we’ve seen that frame.

Now the loop over frames also checks the locals of each frame. If our visit count isn’t there, initialize it to zero, and if it is there, increment it. The visit count is added to the stack display, and we’re good to go:

def setup_test(self):
    import inspect
    import os
    project_home = "/Users/ned/coverage"
    site_packages = ".tox/py39/lib/python3.9/site-packages/"
    with open("/tmp/foo.txt", "a") as foo:
        print("setup_test", file=foo)
        for t in inspect.stack()[::-1]:
            # t is (frame, filename, lineno, function, code_context, index)
            frame, filename, lineno, function = t[:4]
            visits = frame.f_locals.get("$visits", 0)       ## new
            frame.f_locals["$visits"] = visits + 1          ## new
            filename = os.path.relpath(filename, project_home)
            filename = filename.replace(site_packages, "")
            show = "%30s :  %d  %s:%d" % (function, visits, filename, lineno)
            print(show, file=foo)

Now the stacks are still the same, except the visit counts differ. Here’s the stack from the second call to the test setup:

setup_test
                      <module> :  1  igor.py:424
                          main :  1  igor.py:416
           do_test_with_tracer :  1  igor.py:216
                     run_tests :  1  igor.py:133
                          main :  1  _pytest/config/__init__.py:84
                      __call__ :  1  pluggy/hooks.py:286
                     _hookexec :  1  pluggy/manager.py:93
                      <lambda> :  1  pluggy/manager.py:84
                    _multicall :  1  pluggy/callers.py:187
           pytest_cmdline_main :  1  _pytest/main.py:243
                  wrap_session :  1  _pytest/main.py:206
                         _main :  1  _pytest/main.py:250
                      __call__ :  1  pluggy/hooks.py:286
                     _hookexec :  1  pluggy/manager.py:93
                      <lambda> :  1  pluggy/manager.py:84
                    _multicall :  1  pluggy/callers.py:187
            pytest_runtestloop :  1  _pytest/main.py:271
                      __call__ :  1  pluggy/hooks.py:286
                     _hookexec :  1  pluggy/manager.py:93
                      <lambda> :  1  pluggy/manager.py:84
                    _multicall :  1  pluggy/callers.py:187
       pytest_runtest_protocol :  1  flaky/flaky_pytest_plugin.py:94
       pytest_runtest_protocol :  0  _pytest/runner.py:78
               runtestprotocol :  0  _pytest/runner.py:87
               call_and_report :  0  flaky/flaky_pytest_plugin.py:138
             call_runtest_hook :  0  _pytest/runner.py:197
                     from_call :  0  _pytest/runner.py:226
                      <lambda> :  0  _pytest/runner.py:198
                      __call__ :  0  pluggy/hooks.py:286
                     _hookexec :  0  pluggy/manager.py:93
                      <lambda> :  0  pluggy/manager.py:84
                    _multicall :  0  pluggy/callers.py:187
          pytest_runtest_setup :  0  _pytest/runner.py:116
                       prepare :  0  _pytest/runner.py:362
                         setup :  0  _pytest/python.py:1468
                  fillfixtures :  0  _pytest/fixtures.py:296
                 _fillfixtures :  0  _pytest/fixtures.py:469
               getfixturevalue :  0  _pytest/fixtures.py:479
        _get_active_fixturedef :  0  _pytest/fixtures.py:502
        _compute_fixture_value :  0  _pytest/fixtures.py:587
                       execute :  0  _pytest/fixtures.py:894
                      __call__ :  0  pluggy/hooks.py:286
                     _hookexec :  0  pluggy/manager.py:93
                      <lambda> :  0  pluggy/manager.py:84
                    _multicall :  0  pluggy/callers.py:187
          pytest_fixture_setup :  0  _pytest/fixtures.py:936
             call_fixture_func :  0  _pytest/fixtures.py:795
             connect_to_pytest :  0  tests/mixins.py:33
                    setup_test :  0  tests/test_process.py:1651

The 1’s are frames that were the same from the first call to the second, and the 0’s are new frames. We can clearly see that flaky_pytest_plugin.py has the loop that calls the setup a second time.

Typical: once you know the answer, it’s obvious! I use the flaky plugin to automatically retry tests that fail. My new slow test isn’t just slow, it’s also a failing test (for now). So flaky is running it again.

The real mystery isn’t why the setup is called twice, but why the actual run of the test is only reported once. I checked: it’s not just the setup that runs twice, the body of the test is also running twice.
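If you haven’t seen flaky in action, here’s a tiny self-contained sketch (a placeholder test, not my real one) of the same shape: the body runs twice, but pytest reports a single test:

from flaky import flaky

ATTEMPTS = []

@flaky(max_runs=2, min_passes=1)
def test_sometimes_fails():
    # Fails on the first run, passes on the retry, so pytest reports a
    # single passing test even though the body (and any setup) ran twice.
    ATTEMPTS.append(1)
    assert len(ATTEMPTS) > 1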

When I made the test pass, the double execution went away, because flaky no longer needed to re-run a failed test.

This is a classic machete-mode debugging story: the problem was easier to dissect with dynamic tools than with static ones; I hacked in some gross code to get the information I needed; I didn’t know if it would work well, but it did.

BTW, it seems a bit presumptuous to promote this column of numbers to a “visualization,” but it is a good way to see the looping nature of the test runner. Here’s the fourth call stack:

setup_test
                      <module> :  3  igor.py:424
                          main :  3  igor.py:416
           do_test_with_tracer :  3  igor.py:216
                     run_tests :  3  igor.py:133
                          main :  3  _pytest/config/__init__.py:84
                      __call__ :  3  pluggy/hooks.py:286
                     _hookexec :  3  pluggy/manager.py:93
                      <lambda> :  3  pluggy/manager.py:84
                    _multicall :  3  pluggy/callers.py:187
           pytest_cmdline_main :  3  _pytest/main.py:243
                  wrap_session :  3  _pytest/main.py:206
                         _main :  3  _pytest/main.py:250
                      __call__ :  3  pluggy/hooks.py:286
                     _hookexec :  3  pluggy/manager.py:93
                      <lambda> :  3  pluggy/manager.py:84
                    _multicall :  3  pluggy/callers.py:187
            pytest_runtestloop :  3  _pytest/main.py:271
                      __call__ :  1  pluggy/hooks.py:286
                     _hookexec :  1  pluggy/manager.py:93
                      <lambda> :  1  pluggy/manager.py:84
                    _multicall :  1  pluggy/callers.py:187
       pytest_runtest_protocol :  1  flaky/flaky_pytest_plugin.py:94
       pytest_runtest_protocol :  0  _pytest/runner.py:78
               runtestprotocol :  0  _pytest/runner.py:87
               call_and_report :  0  flaky/flaky_pytest_plugin.py:138
             call_runtest_hook :  0  _pytest/runner.py:197
                     from_call :  0  _pytest/runner.py:226
                      <lambda> :  0  _pytest/runner.py:198
                      __call__ :  0  pluggy/hooks.py:286
                     _hookexec :  0  pluggy/manager.py:93
                      <lambda> :  0  pluggy/manager.py:84
                    _multicall :  0  pluggy/callers.py:187
          pytest_runtest_setup :  0  _pytest/runner.py:116
                       prepare :  0  _pytest/runner.py:362
                         setup :  0  _pytest/python.py:1468
                  fillfixtures :  0  _pytest/fixtures.py:296
                 _fillfixtures :  0  _pytest/fixtures.py:469
               getfixturevalue :  0  _pytest/fixtures.py:479
        _get_active_fixturedef :  0  _pytest/fixtures.py:502
        _compute_fixture_value :  0  _pytest/fixtures.py:587
                       execute :  0  _pytest/fixtures.py:894
                      __call__ :  0  pluggy/hooks.py:286
                     _hookexec :  0  pluggy/manager.py:93
                      <lambda> :  0  pluggy/manager.py:84
                    _multicall :  0  pluggy/callers.py:187
          pytest_fixture_setup :  0  _pytest/fixtures.py:936
             call_fixture_func :  0  _pytest/fixtures.py:795
             connect_to_pytest :  0  tests/mixins.py:33
                    setup_test :  0  tests/test_process.py:1651

The 3’s change to 1’s at _pytest/main.py:271, which is the loop over the tests to run. Cool :)

Beginners in a sea of experts

Thursday 18 March 2021

How do you make a space that is good for beginners when there are too many experts who also want to help?

I help organize Boston Python. It’s a great group. We’ve been active during the pandemic; in fact, we’ve added new kinds of events during this time.

One of the things we’re trying to get started is a Study Group based on the observation that teaching is a great way to learn. The idea is to form a small but dedicated group of beginner-to-intermediate learners. They would take turns tackling a topic and presenting it informally to the group.

Here’s the problem: how do you make a space that feels right for beginners when you have thousands of experts in the group who also want to join in?

Beginners:

  • Beginners can be shy and uncertain, both about the topic and about whether this space is even for them.
  • They don’t want to appear dumb. They are afraid they will look foolish, or will be ridiculed.
  • They don’t know that everyone has uncertainties and gaps in their knowledge. They don’t know that not knowing something is inevitable, and can be conquered.

Experts:

  • Experts want to help. They have knowledge and want to share it.
  • Experts can forget how hard it is to be a beginner.
  • Experts can be blind to how their speaking is keeping other people quiet. There’s limited space for talking, but more importantly, expert-level speaking can set the tone that you must be expert-level to speak.

Experts are very good at occupying these spaces. They are comfortable speaking, and eager to share their knowledge. How do we ensure that they don’t monopolize the discussion?

Beginners can be shy, and reluctant to speak. They may feel like they don’t know enough to even ask a question. They don’t want to appear dumb. They hear the experts around them, and feel even more certain that this is not for them.

The experts could have the best intentions: they want to help the beginners. They are interested in the subject, and have useful bits of information to contribute.

I’m looking for ideas to solve these problems!

How to keep the balance of attendees right:

  • Explicitly label the event as “for beginning to intermediate learners.”
  • Send a reminder email about the event, asking people to select themselves out of the event: “We’re really excited that this idea has gotten so much interest. Our goal was to have a smaller conversational group for beginning learners. If that doesn’t sound like you, now is a good opportunity to step back to make space for others.”
  • Have other events labelled for experts so they have a space of their own.
  • Invite specific people one-on-one to increase the number of “right” people.
  • more ideas?

How to encourage beginners to join the group:

  • Use lots of words to underscore the welcoming nature of the group, and that beginners are welcome.
  • Invite specific people one-on-one so they feel sure it’s for them, and that they are wanted.
  • more ideas?

How to encourage beginners to speak in the group:

  • Ice-breaker question
  • Set an expectation that everyone will ask at least one question.
  • Be especially supportive when someone asks a really basic question.
  • Contact them individually to ask if they have anything they want to ask, and help them get it asked.
  • more ideas?

How to encourage beginners to lead a session:

  • Demonstrate vulnerability while leading.
  • Offer to pair with them instead of having them do it alone.
  • more ideas?

Like I said, I’m looking for ideas. The more I run events, the more interested I am in helping beginners get started.

Mapping walks

Monday 15 February 2021

As I mentioned last week in Pandemic walks, part of the fun of the long walks I’ve taken with Nat is mapping them out. The tooling is a hodge-podge of things I’ve discovered along the way, but it works for me.

Planning

Each walk is kept in a GPX file. I can get a picture of where we’ve been on previous walks by looking at them as a collection. Dérive is a simple elegant site that will map a set of GPX files dropped onto it. It’s open-source, and they even implemented a feature request I made.

To plan the walks, I originally used gmap-pedometer.com, but now I use On The Go Map. Both will create a route between two points automatically. On The Go has a big clean layout, and I can tweak the plan by dragging new points in the middle of the route as needed. Gmap-pedometer has more options for map sources, but I found On The Go was better at knowing where I could walk, and making correct routes automatically.

It helps to have a good planning tool because I want to get the right distance, not too short and not too long. I also want to include new streets we haven’t visited before.

I use Google Street View to take a look at spots I’m uncertain about. Is that a street or driveway? Can you get from the end of that street into the park next to it?

Walking

Before heading out, I print the map, on paper! It’s easier than fiddling with the phone, and I can draw on it if we go off-plan or if I want to make a note of something we saw along the way.

I use my phone to figure out where I am when I am uncertain, and I have a link to a large map of all of our previous walks if I want to consider an ad-hoc addition.

There might be apps that can track my walk automatically, but the ones I’ve used in the past captured only an approximation, so I’d rather map the routes myself.

Recording

Back home after the walk, I can use the route from On The Go Map, or re-plot it if needed. On The Go gives me the GPX file to add to my collection, and gives me the distance walked for my stats spreadsheet.

When I want to know more about the history of the place we’ve been, I use Mapjunction. It’s a great dual-view of two maps at once, of your choice. For example, you can look at the current streets and the same region 100 years ago to understand how things have changed.

To produce the animated GIF in the last post, I cobbled together a program using a bunch of tools I didn’t fully understand! The result is gpxmapper. It uses Fiona to read the GPX files, Shapely to compute and plot the geometries, and Cartopy to draw the maps. This was definitely a copy-paste patchwork, so don’t take it as the work of an expert. It works, but I can’t promise it does it the best way.
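Here’s a rough sketch of that pipeline under my assumptions (not the actual gpxmapper code, and the file names are made up): read the track from a GPX file with Fiona, turn it into Shapely geometry, and draw it on a Cartopy map.

import fiona
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from shapely.geometry import shape

# GPX files expose several layers; "tracks" holds the recorded route.
with fiona.open("walk.gpx", layer="tracks") as tracks:
    geoms = [shape(feature["geometry"]) for feature in tracks]

# Draw each track on a simple lat/lon (PlateCarree) map.
ax = plt.axes(projection=ccrs.PlateCarree())
for geom in geoms:
    # A track may be a LineString or a MultiLineString.
    for line in getattr(geom, "geoms", [geom]):
        lons, lats = line.xy
        ax.plot(lons, lats, transform=ccrs.PlateCarree(), linewidth=1)

plt.savefig("walk.png", dpi=150)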

I got some inspiration and headstart from a recent Boston Python presentation: On Python and Positioning: An Introduction to Working with Geospatial Data in Python with GeoPandas by Heather Kusmierz.

The program writes out a pile of PNG files, then uses ImageMagick and Gifsicle to wrangle them into a good animated GIF. A large static version of the total walks is posted online for me to refer to in the field if needed.

•    •    •

Some of this might surprise you as low-tech. For a software engineer, I tend toward low-tech. And as I mentioned in the last post, this whole walking endeavor has given me a much deeper understanding of the neighborhoods around me. Working with the maps to plan and record the walks is part of that process, so I’m not looking to make it more automated.

That said, if you have suggestions, I’m interested to learn!

Pandemic walks

Sunday 7 February 2021

Since last March, we’ve had my son Nat living with us, and the three of us have been doing what everyone is doing: staying to ourselves, working from home, and a lot of Zoom.

Nat and I have also been doing a lot of what we’ve always done together: walking. We both need exercise and a change of perspective, and Susan needs some time without expectations.

So we walk most mornings, whenever the weather and our schedules allow. Our distance has gradually increased, up to about six miles each day.

We’ve been walking since the pandemic started in March, but I only thought to start tracking and mapping the walks at the end of April. To keep it interesting, I’ve planned routes to take us down new streets every day, with the goal of visiting “every” street, whatever that means.

Here are our 191 walks totalling 1025 miles (so far!):

[Animation of a map showing every walk we've taken]

Part of the fun of this has been learning more about mapping tools, but I’ll save those details for another blog post. (Now at Mapping walks.)

I’ve really liked seeing places close to home that I have never been to, neighborhoods within walking distance I never had a reason to visit. I’m lucky to live in a varied area: to the west are some of the most expensive properties in Massachusetts, to the east are true urban areas, north-east is an upscale shopping district, south is expansive park land.

Because this is Boston, there’s plenty of history. Here are a few tidbits I discovered along the way:

While walking I listen to lots of podcasts, since Nat is not big on talking. I try to keep alert for interesting sights, even if they are not History. Some of those go on my Instagram.

I watched The World Before Your Feet, a documentary about Matt Green walking all 8000 miles of the streets of New York City. Parts of his effort were familiar to me, but there were differences. Of course, the scale of the undertaking. But also, my walks always begin and end at our house, so there is a lot of repeated ground. Nat wants to walk fast, and we have Zoom calls to get back to, so leisurely chats with storekeepers are not an option. I envy Matt’s ability to stop and really look at things in depth.

But even so, walking these neighborhoods has given me a shift in perspective. You can notice things while walking that you’d never see while driving. I find now that even when I drive through these neighborhoods, I see things more as a walker would. Will that wear off once we go back to being more car-centric? Or will all this street-level familiarity stick with me?

People talk about the silver linings of the pandemic. For me, these walks, the things I’ve discovered, my different relationship to the neighborhoods, and the routine with Nat, have been a definite silver lining. I’m not sure what of these walks will persist beyond the pandemic. I hope some of it does.
