The myth of the myth of learning styles

Saturday 23 September 2023

In “Advice to beginners” I said, “Learn how you learn,” and many people stepped up to tell me that learning styles are a myth. I know the research about learning styles, but people are over-applying it to dictate how people should learn.

If you haven’t heard about the theory and its debunking, you can read about it in The Atlantic or Education Next. Briefly: the theory was that some people were inherently visual learners, while others were textual learners, among other kinds. This has been proven untrue.

When I said “learn how you learn,” I meant for learners to take an active role in choosing what their path should be. I’m not talking about the four modalities from the debunked “learning styles” myth. There are many effective ways to learn how to program, and you have to choose your way. There are lots of possible choices:

  • Do you want to start with computer science theory, or jump into writing small programs?
  • Do you want to know the inner workings of things, or just how to use them?
  • Do you want to work on small exercises, or choose a project of your own?
  • Once you’ve started a project, do you complete it, or are you willing to leave things unfinished because something else has drawn your attention?
  • Do you work a fixed amount of time each day, or do you adapt as your energy waxes and wanes?
  • Do you want to dive deep into one technology or language, or dabble in many?
  • Are you proactive (learn what you might need later) or reactive (learn what you need right now)?
  • Does studying with others help you, or are you better off learning alone?
  • Are you choosing an industry, and aiming your work directly at it, or trying out many different domains?
  • Do you like videos, or reading text?

These aren’t always binary choices: you might be somewhere in the middle, or shift over time, or seesaw back and forth. Understand what works for you.

I saved videos vs text for last because it seems to be the most contentious. Many people will tell you that watching videos is a bad way to learn programming. I agree that passively watching videos can lull you into thinking you understand concepts. You must write code to truly cement your understanding. Videos can be a fine way to learn, as long as you are active in the learning. But some people will simply rail against videos entirely.

Some experts lean too hard on “learning styles are a myth” and “videos are bad.” This seems to be just another gate-keeping flavor of “the way I did it is the right way.”

The debunking of learning styles was important because teachers were being expected to develop curriculum in many styles to suit different students’ needs. This was extra work, and could also prevent the students from developing learning skills in other modalities.

But I’m not talking about teachers’ choice of modality. I’m talking about learners finding paths that work for them. Today we have an abundance of learning materials, and learners have to choose among them. Everyone has to be empowered to use the ones that suit them. This is a much bigger range than visual/textual/auditory.

Learners have to learn how they learn, and choose the path and materials that work best for them.

Advice to beginners

Sunday 17 September 2023

I often see questions like, “I’m just starting to learn Python, any advice?” The expected answer is something like “watch this video,” or, “here’s an awesome tutorial,” but more important are some tips about how to learn technology in general. Here are mine:

Learn how you learn: Some people like videos, some like books. Some like bite-sized tips, some like hour-long tutorials. Figure out how you like to learn and do that. There’s no wrong answer, there are just different answers for different people. Update: I wrote more about this in “The myth of the myth of learning styles”.

Do things: If you aren’t sure if something will work, try it. Choose a project and start writing it. Lots of learning tools can be too passive. Write code. Write more code. Get your hands in there, do things and find out what happens. You’ll learn best by doing.

It doesn’t have to be new: Writing a project is a great way to learn, but your project doesn’t have to be innovative, or profitable, or new, or even useful. Sure, it would be great to have an idea like that, but you will learn even if you are building something that has been done many times before. Keep in mind why you are building it. You’re a learner, not an entrepreneur.

Be introspective: Think about what you’ve done, and what you like or don’t like about it. When you’re done with a project (or in the middle of one!), think about what you might do differently. Being self-aware about your choices and their results will teach you more than any tutorial.

Do it more than once: If you have questions about which path to take, try a few and compare the results.

Don’t measure yourself against others: “How long did it take you to learn Python?” It doesn’t matter. Everyone is on a different path, with different needs, different starting points, and so on. Learn at your best pace. It’s not a competition. For more of my thoughts on this, see “How long did it take you to learn Python?”

More languages isn’t better: Beginners will say, “I’ve learned Python, what language should I learn next?” Don’t measure your progress in number of languages. More important is to learn new techniques (database, web, testing, concurrency, etc.) in the same-old language. Rushing from language to language will probably make you a beginner in all of them.

Be open to change: Once you’ve made a choice, you can change your mind. Requirements change, technologies change, interests change. Nothing is written in stone. Explore where your interests take you.

Hammerspoon

Sunday 27 August 2023

For a long time now I’ve displayed a small textual info box in the otherwise unused upper-left corner of my Mac screen. It was originally based on GeekTool, but a new laptop got me to re-think it, and now it’s implemented with Hammerspoon.

Hammerspoon is a Mac automation tool driven by Lua programs. It has an extensive API for integration with Mac facilities, so it can automate many aspects of the OS, potentially replacing a number of Mac utilities.

GeekTool was always a little odd and configured much of its behavior in fiddly property panels. Hammerspoon is fully driven by .lua files, so it fits my programmer’s world view better. GeekTool also seems unmaintained these days.

On my Mac, I hide the menu bar and move the dock to the left side. This maximizes the vertical space available to my own windows. But it leaves a small corner unused in the upper left. I have a small Python program that collects information often displayed in the menu bar (date, time, battery level, sound volume, etc). With Hammerspoon I can create a small canvas and display the program’s output text.
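The actual textstatuses.py isn’t shown here, but a minimal sketch of that kind of script might look like this (the specifics are my illustration, not the real program: it just prints one line per status item for the canvas to display):

```python
#!/usr/bin/env python3
# Hypothetical sketch of a status script like textstatuses.py.
# It prints one line per item; the Hammerspoon canvas shows the output.
import datetime
import subprocess

def now_lines():
    """Date and time lines for the info panel."""
    now = datetime.datetime.now()
    return [
        now.strftime("%a %b %d"),   # e.g. "Sat Sep 23"
        now.strftime("%I:%M %p"),   # e.g. "03:14 PM"
    ]

def battery_line():
    """Battery percentage via pmset (macOS only); empty string elsewhere."""
    try:
        out = subprocess.run(
            ["pmset", "-g", "batt"], capture_output=True, text=True
        ).stdout
    except FileNotFoundError:
        return ""
    for token in out.split():
        if token.endswith("%;"):
            return "Battery: " + token.rstrip(";")
    return ""

if __name__ == "__main__":
    for line in now_lines() + [battery_line()]:
        if line:
            print(line)
```

Whatever the script prints is what ends up in the canvas, so adding a new status item is just another print.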

It looks like this:

A cramped info panel on my Mac

Here’s the Lua code that runs the Python from Hammerspoon and draws the canvas:

-- ~/.hammerspoon/init.lua

-- Text-mode "menu bar indicator" replacement
canvas = nil
function createCanvas()
    if canvas then
        canvas:hide()
    end
    local screen = hs.screen.primaryScreen()
    -- frame excludes the dock and menu bar; fullFrame is the whole screen.
    local frame = screen:frame()
    local fullFrame = screen:fullFrame()
    -- The canvas fills the top of the strip left of the usable frame,
    -- so its width adapts to the dock automatically.
    canvas = hs.canvas.new({
        x = fullFrame.x,
        y = frame.y,
        w = frame.x - fullFrame.x,
        h = 175,
    })
    canvas[1] = {
        type = "rectangle",
        action = "fill",
        fillColor = {hex="#D0D0D0"},
    }
    canvas[2] = {
        type = "text",
        frame = {x=2, y=0, h="100%", w="100%"},
        textFont = "SF Pro Text",
        textSize = 14,
        textColor = {hex="#000000"},
    }
    canvas:show()
    canvas:sendToBack()
    drawInfo()
end

function drawInfo()
    -- Run the status script and put its entire output into the text element.
    local openPop = io.popen("/usr/local/bin/python3.10 ~/bin/textstatuses.py")
    canvas[2].text = openPop:read("*a")
    openPop:close()
end

-- Start over when any screen geometry changes.
watcher = hs.screen.watcher.newWithActiveScreen(createCanvas):start()
-- Redraw every 10 seconds.
timer = hs.timer.doEvery(10, drawInfo)
-- Redraw when any audio setting changes.
for i, dev in ipairs(hs.audiodevice.allOutputDevices()) do
    dev:watcherCallback(drawInfo):watcherStart()
end

This has a few advantages over GeekTool: it’s entirely self-contained in a text file I can commit to git, it can listen for events to be more reactive, it can compute its location to take the menubar into account, and so on.

Theoretically, Hammerspoon can also replace other Mac widgets like Caffeine, Rectangle Pro, and so on. I haven’t tried replacing them all, but it’s probably in my future.

One interesting side-effect: learning Lua!

Update, September 2023: now the Python code has been replaced with all Lua code.

Alan Kay’s objects and arts

Sunday 6 August 2023

In 1993 (30 years ago!), Alan Kay wrote The early history of Smalltalk, explaining the origins of the language’s ideas, its development, and an assessment of its successes and failures. It’s a thought-provoking piece for a few reasons.

First, his description of the central idea of object-oriented programming:

Everything can be represented by the recursive composition of a single kind of behavioral building block that hides its combination of process and state inside itself.

(All his quotes here are slightly edited.)

He is of course describing objects. But notice his emphasis on the composition of objects. Many people these days offer the advice to favor composition over inheritance, which seems counter-intuitive to people raised on C++ or Java. Yet here is Kay, who coined the term “object-oriented,” literally using the word composition to describe the key idea.

He goes on to disparage object orientation being taught as a way to encapsulate data structures. He sees methods not as actions, but as goals, something I haven’t wrapped my head around. His original insight for objects was that each should be thought of as a complete computer, and complex systems could be recursively subdivided into smaller and smaller machines exchanging messages. It’s interesting to hear this idea in today’s world of networked micro-services.

Second, the piece is a real mix, moving from historical details about long-forgotten machines to philosophical asides (he references Plato, Leibniz, and Hobbes), to other computer luminaries of the time (Minsky, Papert, Perlis, Lamport, and so on).

Along the way are details about the difficulty of being advanced researchers inside a company like Xerox, finagling budgets and adjusting projects to get things done despite the environment.

Lastly, what motivated Kay was the idea that computing could become a personal activity. When he started, computers had rooms devoted to them. He struggled to build machines that could be carried. He wanted computing to be something that everyone could do, not just consuming content, but creating programs for computers to pursue individual ideas. In the ‘70s when the main work on Smalltalk was happening, computers weren’t used for content at all. When he said “computing” he meant programming and running programs. He saw computing as a liberal art: a way to see the world and a way to develop deeper understandings.

Because of this, his thinking about computing was intertwined with thinking about learning and teaching:

Knowledge is least interesting when it is first being learned. The representations—whether markings, allusions, or physical controls—get in the way and must be laboriously and painfully interpreted. From here there are two useful paths which are important and intertwined.

The first is fluency, which is building mental structures that disappear the interpretations of the representations. The letters and words of a sentence are experienced as meaning rather than markings.

The second is taking the knowledge as metaphor that can illuminate other areas.

The “trick”, and I think this is what liberal arts education is supposed to be about, is to get fluent and deep while building relationships with other fluent deep knowledge.

He calls this fluency with deep ideas “literacy,” and goes on to give a prescient example, including his dream of the ways people might use computers:

Another kind of 20th century literacy is being able to hear about a new fatal contagious incurable disease and instantly know that a disastrous exponential relationship holds and early action is of the highest priority. Another kind of literacy would take citizens to their personal computers where they can fluently and without pain build a systems simulation of the disease to use as a comparison against further information.

At the liberal arts level we would expect that connections between each of the fluencies would form truly powerful metaphors for considering ideas in the light of others.

The reason, therefore, that many of us want children to understand computing deeply and fluently is that like literature, mathematics, science, music, and art, it carries special ways of thinking about situations that in contrast with other knowledge and other ways of thinking critically boost our ability to understand the world.
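Kay’s “disastrous exponential relationship” is concrete enough to sketch. Here’s a toy illustration (mine, not Kay’s) of the arithmetic a fluent citizen would instantly see:

```python
# Toy model of unchecked spread: each infected person infects r others
# per step, so the count multiplies by r every step.
def infections(r, steps, initial=1):
    """Return the infected count at each step of unchecked spread."""
    counts = [initial]
    for _ in range(steps):
        counts.append(counts[-1] * r)
    return counts

# With r = 2, cases double every step: 1, 2, 4, 8, ...
print(infections(2, 10))
```

Ten doublings turn one case into a thousand, which is the whole point: the literate reaction is to see that shape immediately and know that early action matters most.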

Now 30 years later, we have not achieved Kay’s ideal. Everyone has a powerful computer, not just on their desk but in their pocket, but they are used for consumption and socialization, not computing. I won’t wring my hands about this: Kay’s ideal of liberal arts as a way to build deep fluency isn’t happening to the extent he would like outside of computing either.

I am interested in how to help more people learn computing in Kay’s sense. Highlighting fluency and metaphor is insightful. They are good north stars for education of all sorts.

Small talk

Friday 28 July 2023

One thing I wish I had mentioned in my PyCon 2023 keynote: small talk. Engineers often find small talk difficult. I know I’ve had trouble with it. It can feel like “talking about nothing.” But small talk helps build connections between people.

If you haven’t seen the keynote (you should watch it!), a central idea there is that our words carry both information and sentiment, that sentiment is important for being heard, and that missing sentiment is defaulted from history and similarity.

That model gives us a way to see the value in small talk, even if it’s about nothing: simple chatting about simple things can be positive interactions that build a history. Talking about non-work topics can uncover surprising connections, adding to a feeling of similarity. These both create a reserve of good sentiment to draw on in future interactions.

Talking with another engineer recently, they said, “small talk feels unnatural to me.” I think what they meant was that they weren’t intrinsically interested in the discussion. A good definition I heard once is that a nerd is someone who is unfashionably focused. Small talk won’t fall into their narrow focus, so it feels useless or boring.

In the keynote I make the point that engineers are good at learning new skills, and people skills are no exception. You can bring engineering approaches to getting better at people skills. Small talk can be difficult even with this in mind though: you can’t see the reservoir of good will you are building with someone.

As my engineer friend put it in classically nerdy terms:

It’s not clear if conversations[-1].outcome == Outcome.GOOD

I get it, I’m not sure I’m that good at small talk myself. I don’t have pets, I’m not a foodie, I don’t drink beer, I don’t play video games, I don’t know anything about Magic The Gathering or anime, and so on. It can be hard to find something insubstantial but shared to talk about. And “insubstantial” doesn’t sound good in the first place.

Connecting with people is worth it, small talk is a good way to connect with people, and everyone can learn to be better at it. Did I mention I gave a keynote about interacting with people?

Untangle your own adventure

Sunday 23 July 2023

Boston Python runs weekly office hours, and I noticed the discussions there often start from a familiar point: how do I get my Python environment to work?

This is a question that recurs in any Python support arena. Python has gotten a lot of heat for environment complexity, but that complexity is due to a number of factors, none of them bad on their own: Python has been around for a long time, it’s used in many different ways, it lends itself to experimentation with tooling, and so on.

I don’t want to gripe about the complexity, and I don’t have a proposal for how to reduce the complexity. What I would like is a resource that people can use to find their way through the complexity.

I’m imagining a self-guided tour that would ask questions about what the user needs, and would bring them to pages with solutions. It would be similar to a Choose Your Own Adventure, but for Python environments. I’ve even started toying with tooling that could produce pages like that.
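The data side of such a tour is simple; the hard part is the content. As a toy sketch (the questions and answers here are hypothetical placeholders, not a real curriculum), the structure might be nothing more than a tree of questions with answers leading to further questions or to advice pages:

```python
# Toy "choose your own adventure" tree for Python environment questions.
# The question/answer text is hypothetical placeholder content.
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str                                     # question, or final advice
    choices: dict = field(default_factory=dict)   # answer -> next Node

    def is_leaf(self):
        return not self.choices

tree = Node(
    "What do you want to do?",
    {
        "install Python": Node("See the python.org downloads page."),
        "install modules": Node("Use pip inside a virtual environment."),
    },
)

def walk(node, answers):
    """Follow a sequence of answers down the tree; return the leaf text."""
    for ans in answers:
        node = node.choices[ans]
    return node.text
```

A static-site generator could render each Node as its own page, with the choices as links, so the “adventure” is just ordinary hyperlinked pages.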

But there are a number of possible questions that people could be starting with:

  • How do I install Python?
  • How do I install modules to use?
  • How do I package my code for distribution?

Answering any of these well requires finding out details from the user:

  • What domain are you working in?
  • What kind of expertise do you already have?
  • How deeply do you want to be involved in the decisions?

And then more technical details:

  • What operating system?
  • What shell do you use, if any?
  • What editor/IDE do you use?
  • What version of Python?

and most of these might be answered with, “I don’t know, can you tell me which to use?” which just leads to more attempts to educate and explain complexities that feel to the user like distracting, confusing trivia.

Even harder is the question, “I tried to do it myself, but it’s not working, how do I fix it?” Untangling that requires forensics and details, then education about the inner workings of things and how they got mis-configured.

As tempting as it is to try to capture all of the possibilities and solutions in a flow chart or adventure game, it’s almost certainly impossible to solve most people’s problems.

Has anyone seen examples of instructional materials organized in a decision tree like this? Not even about Python, but about anything? I feel like this is impossible, but if it could make a dent in the difficulties people are having, it could be very valuable. I can’t quite get the idea out of my head, and I can’t make a serious start on it.

What can I look at to learn from? Not learn about Python environments, but learn how to construct self-guided decision tree materials?

Update: Dennis Dawson reminded me that I previously blogged about On undoing, fixing, or removing commits in git, which has been updated to be more dynamic at Git Fix Um. These are a good example of the kinds of thing I am looking for.

More update: I got some more suggestions of examples of this sort of thing. SmartBear Online Troubleshooter is a classic wizard flow. There are a few for installation instructions:

More: Veracode cheatsheet and PyTorch Get Started.
