Don’t Curate the Data

It’s tempting, when we talk to others about our ideas, to only want to share the good stuff.  To only share the things we think are logical and sound reasonable, or maybe only the things we think (or hope) will make us seem smart and focused.  But this tendency to re-frame our real experiences and distill them into nice little stories we can tell people over coffee or a beer can be a dangerous setback to getting better at a new skill set.

 

Trying Too Hard to Look Good

 

Why?  Because sometimes we are so busy thinking about how to tell (or should I say sell) others what we’re doing or thinking that we scrub our memories clean of the actual messy chain of events that led us to the polished version.  That messy chain, and every twist, turn, and chink in its construction, is the raw knowledge from which we can learn how we, or others, actually accomplish things.  I’ll call it “the data.”

So this fear of how others will perceive our process is one thing that gets in the way of having good data about our process.  We start to curate the data to make ourselves more acceptable to others.

But we need this data to gain a meaningful awareness of what we actually do to produce a certain outcome.  This is even more important when we try to figure out how to reproduce a mental outcome.

Maybe you came up with a winning idea once, but now you’re not sure how to get the magic back.  Or maybe you want to pass your strategy on to a younger colleague or friend, but don’t really know what you did.  Maybe you’re hoping to learn from someone else who succeeded at thinking up a breakthrough solution, but they say “I really don’t remember what I did.  It just sort of came together.”

Which brings us to a second thing that works against having access to good data about our own interior processes and patterns.  Memory.

 

Mining Memory is a Tricky Business

 

We all know we don’t have good memories, even when we are trying hard (studying for tests in school, or trying to remember the names of ten new people you just met, are classic examples).  Memory is imperfect (we have weird, uncontrollable gaps in what we retain).  Memory is selective (we tend to be really good at remembering what happened during highly emotional events, but not during more mundane or routine moments).  Memory is pliable (the more we tell and retell a version of something that happened to us, the more likely we are to lose the actual memory and keep only our story version).

These tricks of memory not only frustrate us when we try to observe and learn from ourselves, but also when we try to learn from others.

Famous scientists who made discoveries have been interviewed many times about how they did it.  But their self-reported stories are notoriously unreliable or have big gaps, because they, like us, are subject to the fickle whims of memory and the hazards of telling your own biography one too many times.  Mining memory for useful insights is a tricky business.

So memory and lack of awareness (or mindlessness) cause us to lose access to the precious data we need to be able to see our behaviors and patterns from a larger perspective in order to learn from them and share them.

When I first started learning about scientific discovery, recognizing these pitfalls of bad memory and mindlessness caused me a lot of annoyance.  I would think of a great example of a scientific discovery, one that resembled an area or question I wanted to make discoveries in myself.  I’d think, “Perfect!  I’ll go read up on how they did it, how they discovered it.  What were they reading, what were they doing, who were they talking to?”  But of course, answers to those questions wouldn’t exist!

Maybe the discovery was of limited interest, so nobody bothered to ask those questions and now the discoverer has passed away.  Or maybe the discovery was huge and world-changing, but the histories told about it tended to rehash the same packaged myths: Newton and the falling apple inspiring ideas about gravity, or Einstein taking apart watches from an early age leading him to picture little clocks when working out how traveling near light speed affects time in special relativity.  Part fact, part fiction, these stories leave hundreds of hours of more mundane moments, links in the mental chain, unilluminated.  Good data that could guide future generations gets lost, sacrificed on the altar of telling a whimsical story.

So when I sat down in September of 2018 to start trying to work out a more modern definition of scientific discovery—something pragmatic that you could use to figure out what to do during all those mundane moments—I kept thinking about how to better capture that process of obtaining insights, as you go.

That’s when I realized we already have the methods; the problem is we always want to curate the story told after the fact.  And rather than curating the data that make it into the story (i.e., creating an executive summary and redacting some things), we end up curating the source data itself (i.e., never gathering the evidence in the first place).  In other words, rather than just leaving out parts of the story, we tune out parts of the story as we are living it, so that we literally lose the memory of what happened altogether.

But that story is the raw data that fields like metascience and the “science of science” need to help figure out how scientists can do what they do, only better.  And as scientists we should always be the expert on our own individual scientific processes.  The best way to do that is to start capturing the data about how you actually move through the research process, especially during conceptual and thinking phases.  Capture the data, don’t curate the data.

 

A Series of Events

 

Let me give you a real life example to illustrate.  As I said, I sat down to try to come up with a new definition of scientific discovery.  I’m a physicist by training.  Defining concepts is more a philosopher’s job, so at first I had a hard time taking myself and any ideas I had seriously.  I got nowhere for three months; no new ideas other than what I had already read. Then one day a series of events started that went like this:

I read a philosophy paper defining scientific discovery that made me very unhappy.  It was so different than my expectation of what a good and useful definition would be that I was grumpy.  I got frustrated and set the whole thing aside.  I questioned why I was studying the topic at all.  Maybe I should stick to my calling and passion, physics.  I read when I’m grumpy, in order to get happy.  So I searched Amazon.  I came across a book by Cal Newport called So Good They Can’t Ignore You.  It argued that passion is a bad reason to pursue a career path, which made me even grumpier; so grumpy I had to buy the book in order to be able to read it and prove to myself just how rightfully disgruntled I was with the premise.

Newport stresses the idea of “craftsmanship” throughout his book.  I was (and still am) annoyed by the book’s premise and not sold on its arguments, but “craftsmanship” is a pretty word.  That resonated with me.  I wanted to feel a sense of craftsmanship about the definition of scientific discovery I was creating and about the act of scientific discovery itself.

I didn’t want to read anymore after Newport.  So I switched to watching Netflix.  By random chance I had watched a Marie Kondo tidying reality series on Netflix.  Soon after, Netflix’s algorithm popped up a suggestion for another reality series called “Abstract: The Art of Design.”  It was a series of episodes with designers in different fields: architects, Nike shoe designers, set designers for theater and pop-star stage shows, and so on.  It pitched the series as a behind-the-scenes look at how masters plied their craft.  Aha, craftsmanship again!  What a coincidence.  I was all over it (this was binge-watching for research, not boredom, I told myself).  I was particularly captivated by one episode about a German graphic designer, Christoph Niemann, who played with Legos, and whose work has graced the cover of The New Yorker more than almost any other artist’s.  The episode mentioned a documentary called “Jiro Dreams of Sushi.”

Stick with me.  Do you see where this is going yet?  Good, neither did I at the time.

So I hopped over to Amazon Prime Video to rent “Jiro Dreams of Sushi,” about a Japanese Michelin-rated chef and his lifelong obsessive, perfectionist work ethic regarding the craft of sushi.  At one point the documentary showed a clip of Jiro receiving his Michelin rating, and they mentioned what the stars represent: quality, consistency, and originality.  Lightbulb moment!  Something about the ring of three words that summed up a seemingly undefinable craft (the art of creating delicious food) felt like exactly the template I needed to define the seemingly undefinable art of creating new knowledge about the natural world.

So I started trying to come up with three words that summed up “scientific discovery”.  Words that a craftsman could use to focus on the elements and techniques that would improve their craft of discovery.  There were more seemingly mundane and off-tangent moments over a few more months before I came up with the core three keywords that are the basis of the definition I am writing up in a paper now.

The definition is distinctive, with each term getting its own clear sub-definition that helps lay out a way to critically examine a piece of research and evaluate its “discovery-ness”, i.e., its discovery potential or significance.  It’s also possible to quantify the definition, in order to rank research ideas relative to one another by discovery level (minor to major discovery).

It’s a much better idea than some of the lame generic phrases that I came up with in the early days, like “scientific discovery is solving an unrecognized problem” (*groan*).

On an unrelated track at that time, I was reading Susan Hubbuch’s book, Writing Research Papers Across the Curriculum, and had come across her idea that you create a good written thesis statement by writing out the statement in one sentence and then defining each keyword in your statement using the prompt “By <keyword> I mean…”.  So then I took the three keywords I had come up with and started drafting (dare I say crafting?) their definitions in order to clarify my new conception of “what is scientific discovery?”

So that’s the flow…my chain of discovery data:

Reading an academic paper led to disgust; disgust led to impulse spending; impulse spending brought in a book that planted the idea of craftsmanship; craftsmanship led to binge-watching; binge-watching led to hearing a nice definition of something unrelated; the nice definition inspired a template for how to define things; and simultaneously reading a textbook suggested how to tweak the template to get a unique working definition down on paper.

How do I know all this?  I wrote it down!  On scraps of paper, on sticky notes, in spiral notebooks, in Moleskines, in Google Keep lists, Evernote notes, and OneNote notes (I was going through an indecisive phase about which capture methods to use for ideas).

I learned to not just write down random thoughts, but also to jot down what inspired the thought, i.e., what was I doing at the moment the thought struck—reading something, watching something, eating something, sitting somewhere, half-heartedly listening to someone over the phone…(Sorry, Mom!)?  Those are realistic data points about my own insight process that I can use later to learn better ways to trigger ideas. (And, no, my new strategy is not just to watch more Netflix.)

 

Make a Much Grander Palace of Knowledge

 

Instead of trying to leave those messy, mundane, and seemingly random instigators out, I made them part of my research documentation and noted them the way a chemist would note concentrations and temperatures, a physicist energies and momenta, a sociologist ages and regions.

And then I promised myself I wouldn’t curate the data.  I wouldn’t judge whether or not impulse book buying is a great way to get back on track with a research idea, or whether or not Marie Kondo greeting people’s homes with a cute little ritual is a logical method of arriving at a template to devise operational definitions.  I wouldn’t drop those moments from memory, or my records of the research, in order to try and polish the story of how the research happened.  I’ll just note it all down.  Keep it to review.  And maybe share it with others (mission accomplished).

Don’t curate the data, just capture the data.   Curation is best left to analysis, interpretation, and drawing conclusions, which require us to make choices—to highlight some data and ignore other data, to create links between some data and break connections among other data.  But think how much richer the world will be if we stop trying to just tell stories with the data we take and start sharing stories about how the data came to be.  The museum of knowledge will become a much grander palace.  And we might better appreciate the reality of what it is like to whole-heartedly live life as a discoverer.

 

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Don’t Curate the Data”, The Insightful Scientist Blog, August 2, 2019, https://insightfulscientist.com/blog/2019/dont-curate-the-data.

 

 

[Page Feature Photo: The gold dome in the Real Alcazar, the oldest royal palace still in use in Europe, located in Seville, Spain. Photo by Akshay Nanavati on Unsplash.]

Misfits Matter

How to use the trial and error method to make a scientific discovery.

 

I like moving, exploring new places, and visiting friends and family (in short, manageable doses).  I can put up with traveling for work.  But one thing never ceases to annoy me:  Whenever I take a shower for the first time in a new place, I can’t for the life of me get the knobs, handles, and faucets to work right.  I spend at least five minutes trying to get the water to stop being boiling or freezing, or trying to get the dribble out of the shower head to be decent enough to rinse.  Maybe you can relate.

But I’ll bet it never occurred to you that how you solve this problem is really scientific discovery skills in action:  you start fiddling with all the water controls you can see.

That’s because it’s a classic example of doing the right kind of trial and error.  So I’ll use it to outline what I think are four key dimensions that help structure trial and error for discovery:

  1. Putting in the right number of trials
  2. Putting in the right kinds of trials
  3. Putting in the right kind of error
  4. Putting in the right amount of error

The overall theme here is this — it ain’t called “trial and success” for a reason.  The errors are part of the magic…that special sauce…the je ne sais quoi…that makes the process work.

You may have seen versions of this idea in current business-speak around innovation and start-ups (the Lean build-measure-learn cycle anyone?).  But I needed to take it out of the entrepreneurial context and put it into a science one.

So let’s get down to brass tacks and talk about important aspects of trial and error.

 

4 Goals for Thoughtful “Trial and Error”

 

I’m going to keep the shower faucet analogy going because it’s straightforward to imagine hitting the goals for each dimension.  But to give this a fuller scientific discovery context I’ll add one technical example at the end of the post.

 

Dimension #1 — On putting the right number of trials into your trial and error.

 

Goal:

Keep running trials until you gain at least one valued action-outcome insight.

 

When you start out on a round of trial and error you are really aiming for complete understanding and the skill to make it happen on demand, with fine control.

In our shower analogy, that means it’s not just enough to know how to get water to come out of the spout.  You need to be able to control the water temperature, the water pressure, and make sure it comes out of the shower head and not the tub spout (if there is one).  Ideally, you’d learn enough to be able to manipulate the handles to produce a range of outcomes:  the temperature sweet spot for a summer day shower or a winter one; the right pressure for too much soap with soft water or for sore skin from the flu.

So one of the first things you have to figure out is: how do you know when to stop making trials?

This isn’t a technical post about conducting blind trials or sample surveys.  Here we’re talking about a more qualitative definition of done; the kind of thing you might try for an “exploratory study”.  Exploratory studies are the kind where you have no hypothesis going in.  Instead, you’re trying to find your way toward an unknown valued insight, not trying to prove or disprove a previous hypothetical insight.

The whole point of trial and error is to take a bunch of actions that will teach you how to create desired results by showing you what works (called “fits”), what doesn’t work (called “misfits”), and forcing you to learn why.

The “why” is the valued insight you’re after.

If you’ve run enough trials to figure out how to make something happen, that’s good, but not enough.  For scientific discovery you need to know precisely why and precisely how it works.

So keep running trials until you’ve come up with an answer to at least one why question.

 

Dimension #2 — On putting the right kinds of trials into your trial and error.

 

Goal:

Try a mixture of fits and misfits.

 

A key facet of trial and error is that intentionally generating mistakes helps create insight into how to generate success.

Partly, these trials are about firsthand experience.  Your job is to move from “wrong-headed” ideas to “right-tried” experiences.  To make changes to how you operate, you have to clearly label and identify two things in your trial and error scenario: “actions I can take” and “results I want to control”.

Good trial and error means that you will: (1) learn the range of actions allowed; (2) try every possible major action to confirm what’s possible and what’s not; and (3) learn from experience which actions produce what outcomes.

In the last section I brought up the terms fit and misfit: in some science work, getting a match between an equation you are trying and the data is called a “fit” and getting a mismatch between the two is called a “misfit”.

So in science terms, that means you want your trials to be a mixture of things you learn will work (fits), things you learn won’t work (misfits), and, if possible, things where you have no idea what will happen (surprises).

For my shower analogy, let’s use a concrete example: the shower in my second bathroom, which both my mom and aunt have had to use (and, rightfully, complained about).

[Photo: The handles that control the shower in my guest bathroom in my UK apartment.]

So, for “actions I can take”: rotate left handle, rotate right handle, or pull the lever on the left handle.  And for “results I want to control”: the water temperature and the amount of water coming out of the shower head.

Then, I start moving handles and levers individually.  Every time I move a handle and don’t get the outcome I want, it’s a mistake.  But I’m doing it intentionally, so that I can learn what all the levers do.

Many of these attempts will be misfits, producing no shower at all or cold water or whatever.  Some may accidentally be fits.  Hopefully, none will produce surprises (though I have had brown water and sludge come out of faucets before).
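If it helps to see that bookkeeping laid bare, here is a toy sketch in Python.  Everything in it (the actions, outcomes, and labels) is invented for illustration; the point is simply keeping fits, misfits, and surprises together in one log.

    # Toy sketch of structured trial and error for the shower example.
    # All actions and outcomes are invented for illustration.
    trial_log = []  # each entry links one action to one observed outcome

    def record(action, outcome, label):
        """Log a trial as a fit, misfit, or surprise."""
        trial_log.append({"action": action, "outcome": outcome, "label": label})

    record("rotate left handle fully", "water turns scalding", "misfit")
    record("rotate right handle fully", "water turns icy", "misfit")
    record("pull lever on left handle", "flow switches to shower head", "fit")
    record("rotate both handles halfway", "brown sludge dribbles out", "surprise")

    # Review the log: the mixture of fits and misfits is the point.
    for trial in trial_log:
        print(f"{trial['label']:8} {trial['action']} -> {trial['outcome']}")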

I think this visceral experience is what allows your mind to stop rationalizing why standard approaches and methods should work and get on with seriously seeking out new and novel alternatives that actually work.

And these new and novel alternatives, with their associated insights, are the soul of scientific discovery.

So you want to move into this open-minded, curious, active participant-and-observer state as quickly as possible, and trying fits and misfits will help you do that.

 

Dimension #3 — On putting the right kind of error into your trial and error.

 

Goal:

Make both extreme and incremental mistakes.

You know the actions you can take.  But you need to figure out why certain actions lead to certain results.

One great way to do this is to try the extreme of each action.

If it’s safe (or you have a reasonable expectation of safety) then pull the lever to the max, rotate the faucet handle all the way, cut out almost everything you thought was necessary, and see what happens.

In physics, this goes by the name “easy cases”.  What we really mean is use the extreme values: zero, negative infinity, or positive infinity.  Plug them into your model and see what happens.  Does it break things?  Does it give wonky answers?  Does it lead to a scenario where the role of one term in the equation becomes clearer?
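To make that concrete with a generic example (not one from my own work), take the textbook formula for the range of a projectile and plug in the extreme launch angles.  A minimal sketch:

    import math

    def projectile_range(speed, angle_deg, g=9.8):
        """Idealized range of a projectile (no air resistance)."""
        return speed**2 * math.sin(2 * math.radians(angle_deg)) / g

    # Easy cases: the extremes 0 and 90 degrees both give (essentially)
    # zero range, which pinpoints the role of the sin(2*theta) term and
    # hints that the maximum must sit between them (at 45 degrees).
    for angle in (0, 45, 90):
        print(angle, round(projectile_range(20.0, angle), 2))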

That’s the beauty of extreme tests when you’re doing trial and error.  They let you crank up the volume on factors so that you can pinpoint what they might do, how they might operate in your context.

So what about making “incremental” mistakes?  Just nudging things a little this way and a little that way to see what happens?

These are absolutely necessary too, and tend to happen later on in your trial and error process.  They are a great way to confirm and refine your understanding.

If you want to boil it down, making mistakes at the extreme ends of the action cycle hones your “this-does-that” knowledge, while making mistakes in small incremental steps helps clarify “how” knowledge.

So it’s often best to go after extreme cases in the early trials and then move toward incremental cases later on.  For example, with the shower handles, early on you’ll probably try rotating one handle all the way to the right or left to figure out which direction brings hot water.  Later on, you’ll turn the handle a little bit at a time, until you get the right temperature.

 

Dimension #4 — On putting the right amount of error into your trial and error.

 

Goal:

Make mistakes until you can link all major actions with outcomes.

 

This one is easy enough to grasp.  To put it more bluntly: how many times should you mess up on purpose?

The goal statement says it all: make enough mistakes that you can link all major actions with outcomes in your mind, and you know why they are linked the way they are.

Just imagine if you were told that every move you made to set an unfamiliar shower, whose knobs you didn’t know at all, had to move toward the right outcome (no errors allowed).  How the heck would you succeed?  You would have to look up a manual, or find someone who had used the shower before.  It would probably slow the process down to a painstaking pace.  It would stress you out.  And it would require pre-existing insight into how to do it right.

But in discovery, you won’t have that kind of prior insight.  No one does.  So you have to be willing to get things wrong in order to start to generate that insight.

So keep getting it wrong in your trials until you really get why it doesn’t work.  Don’t avoid those misfit moments.  You should be able to make a table or a mind map of links between actions and outcomes.  If you can’t, keep making errors until you can.

 

The Four Trial and Error Dimensions in a Real Physics Research Example

 

I promised I would connect the ideas I’ve talked about to a science example, so let me do that:

For my Ph.D. neutrino physics work, at one point I had to write a piece of computer code that could reproduce the final plot and numbers in an already-published paper by the MINOS neutrino oscillation experiment, to make sure our code modeled the experiment well.  First, I wrote some code (to estimate the total number of neutrino particles we predicted this experiment to see at certain energies) based on how my research group had always done it.  Then I wrote down in my research notebook how the existing code had previously been tweaked to produce a good match.  One value had been hand-set, by trial and error, to fit.

In the newer data published at the time, we knew this tweak no longer worked.  But at first I just tried it anyway (try misfits).  Then I started changing the values in the code (make incremental changes).  And we added a few new parameters that we could adjust and I altered those values (try unknowns).  I kept detailed hand lists of the results of my changes on the final output numbers (link actions to outcomes).

Then I synthesized these behaviors into new groupings: did it make the results too big, too small, by a little, by a lot?  Did it skew all the results or just the results at certain energies?  Was it a consistent overall effect, or some weird pattern effect?
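My actual code is far too long to show here, but a minimal sketch of that kind of scan might look like the following, where the model, parameter names, and numbers are all invented stand-ins rather than real MINOS quantities:

    # Minimal sketch of a trial-and-error parameter scan.
    # The model, parameters, and targets are invented stand-ins.
    def predicted_counts(energy, norm, shift):
        """Stand-in for a real predicted-event-rate calculation."""
        return norm * max(energy - shift, 0.0) * 10.0

    published = {1.0: 8.0, 2.0: 19.0, 3.0: 31.0}  # energy -> target counts

    trials = []
    for norm in (0.8, 1.0, 1.2):      # incremental changes to an old knob
        for shift in (0.0, 0.5):      # a newly added parameter
            mismatch = sum(
                (predicted_counts(E, norm, shift) - target) ** 2
                for E, target in published.items()
            )
            trials.append(((norm, shift), mismatch))

    # Link each action (a parameter choice) to its outcome (the mismatch),
    # then group the behaviors: too big, too small, skewed at some energies?
    for params, mismatch in sorted(trials, key=lambda t: t[1]):
        print(params, round(mismatch, 1))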

At this point I kept many code versions to be able to have a record of the progression of my trials (fancy versioning software isn’t commonly used in small physics groups).

[Image: A screenshot showing some of the folders and files from my Ph.D. computer codes that required trial and error.]

And I did handwritten notes where I worked through why certain outcomes weren’t produced and others were (try until you get insight).


 

Then I did it again.  And again.  And we did it for 10 more experiments totaling…well, a LOT of code.

In the end we got a good match and we were able to use it to complete my Ph.D. work, which explored the impact of a mathematical symmetry on our current picture of the neutrino particle.

So, trial and error, being able to willfully make mistakes to gain insight, can be incredibly powerful and remains a uniquely human skill.

As a 2010 study in Nature suggested, non-expert video gamers (i.e., many with no education in the topic beyond high school level biology) out-predicted a world-leading machine algorithm, designed by expert academic biochemists and computer scientists, in coming up with correct 3-D protein shapes, because they made mistakes on purpose while generating intermediate trial solutions.

Algorithms like that one are, by design, constrained to do only one thing: get a better answer than they had before.  Every step must be forward; even temporary small failures are not allowed.

But we’re messy humans.

We can take two steps back for every one step forward, or even cartwheel off to the side when the rules say only walking is allowed.  Our ability to strategically move in “the wrong direction” (briefly taking us farther away from a goal) in order to open up options that in the long-run will move us in “the right direction” (nearer the goal) is part of our human charm and innate discovery capacity.  But that requires we acknowledge up front that in pursuit of discovery many trials will be needed, and many of them will not succeed.

 

Mantra of the Week

 

Here is this week’s one-liner; what I memorize to use as a mantra when I start to get off-track during a task that’s supposed to help me innovate, invent, and discover:

Misfits matter.

Using trial and error in a conscious, structured way can move us from having thoughts on something to experiences in something.  Notice how “thoughts on” speaks to the surface, like a tiny boat on a broad ocean, while “experiences in” speaks to the depths, like a diver in deep water.  So try.  And err.  Welcome error by remembering that misfits matter and that a deep perspective is where radical insight awaits.  In taking two steps back for every one step forward, those two steps back aren’t setbacks, they’re perspective.

 

Final Thoughts

 

So let’s recap the ideas and examples I’ve talked about in this post:

  • I shared the four dimensions that help define strategic trial and error: putting in the right kind and number of trials, and putting in the right kind and amount of error.
  • I shared an example of how trial and error has been used in my own physics work and in biology to get useful insights.

Have your own recipe or experiences related to trial and error?  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Web article – “Insight”, Wikipedia entry, https://en.m.wikipedia.org/wiki/Insight.
  2. Web article – Ed Yong, “Foldit – tapping the wisdom of computer gamers to solve tough scientific puzzles”, Discover magazine website, Not Exactly Rocket Science blog, August 4, 2010, http://blogs.discovermagazine.com/notrocketscience/2010/08/04/foldit-tapping-the-wisdom-of-computer-gamers-to-solve-tough-scientific-puzzles/#.XKPkLaZ7kWo.
  3. Website – MINOS neutrino oscillation experiment, http://www-numi.fnal.gov/.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Misfits Matter”, The Insightful Scientist Blog, March 22, 2019, https://insightfulscientist.com/blog/2019/misfits-matter.

 

[Page Feature Photo: An ornate faucet at the Hotel Royal in Aarhus, Denmark. Photo by Kirsten Marie Ebbesen on Unsplash.]

The Re-Education of an Educated Mind

I once told a fellow graduate student at a nuclear physics summer school that, “I don’t speak math.”  He found this very funny, and me very funny.  But I absolutely meant it.  In fact, I was angry about it.  By that time, I had already met the sleep-depriving scientific discovery question I’ve dreamed of answering for the last decade.  I had been trying to solve it.  It’s why I attended the nuclear physics summer school at all.  It was considered “outside my area”, since my Ph.D. advisor and I had agreed I would declare my concentration as particle physics.  I thought gaining more knowledge would help me make progress.  But then I discovered that I don’t speak math.  I read math.  I calculate math.  I derive math.  But I don’t speak math.

In my current conception of the scientific discovery cycle the flow goes like this: question → ideation → articulation → evaluation → verification, with constant feedback between phases, and the ability to reset to an earlier phase as needed.  At the time of the nuclear physics summer school, I had the question in mind and I’d come up with three possible ideas for answers.  But my efforts completely died at “articulation”, re-phrasing my mental conceptualization of each answer as mathematical equations, because I didn’t speak math.
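If it helps to see that cycle laid out explicitly, here is a toy encoding in Python.  The phase names come from my conception above; the success-and-reset logic is purely illustrative.

    # Toy encoding of the discovery cycle described above.
    # The phases are from the post; the reset logic is illustrative.
    PHASES = ["question", "ideation", "articulation", "evaluation", "verification"]

    def advance(phase, succeeded, reset_to="question"):
        """Move to the next phase on success; otherwise fall back."""
        i = PHASES.index(phase)
        if succeeded and i + 1 < len(PHASES):
            return PHASES[i + 1]
        if succeeded:
            return phase  # verification passed: the cycle is complete
        return reset_to   # feedback: drop back to an earlier phase

    # My bottleneck in this story: articulation keeps failing, so the
    # cycle keeps resetting instead of ever reaching evaluation.
    print(advance("ideation", succeeded=True))                    # articulation
    print(advance("articulation", succeeded=False,
                  reset_to="ideation"))                           # ideation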

What do I mean by “speak” math?  And how is this different from reading and calculating?

Put it in another context.  As someone who idly studied five languages besides my native English (no, I can’t speak them all now) and who has a parent who raised me as semi-bilingual and does her professional work in at least two languages, I’ve experienced the feeling of “reading without speaking” many times before.

“Read” means I can identify things on signs that I’ve memorized or seen before.  “Read” means that I can sometimes derive related things, like word signs for the women’s toilet in a restaurant versus the signs I saw at the airport.  “Read” means I can muddle through restaurant menus, especially if there are pictures.

“Speak”, on the other hand, means I can mention to a restaurant server that the ladies’ room is out of toilet paper.  “Speak” means I can make a special meal request that’s not on the menu at all.  “Speak” means I can compose a Physicist’s Log entry about scientific discovery, even when I’m not sure how to define it, how to describe it, or how to achieve it.

“Read” means recognition; “speak” means creation.  While I can read math just fine, I can’t create new mathematical expressions with meaning off the top of my head, the way I can churn out sentences in a log entry.  Because I “can’t speak math”, there’s a bottleneck in my discovery cycle, right at the phase of articulation.

I’ve spent years since that summer school digging around looking for practices to help relieve the bottleneck:  Do more math! (Funny how more reading doesn’t equal better speaking.)  Try Fermi questions! (Back of the envelope calculations to answer odd questions about everyday life; but mostly just add and multiply things.)  Just practice modeling!  (Writing down just the starting equation, given any kind of physics word problem.  But this assumes you already know the physics and just need to recognize it in the problem.  What happens when nobody knows the physics yet?)

It wasn’t until I started studying cognitive psychology and scientific discovery that I came across a new option in a book called Where Mathematics Comes From:  How the Embodied Mind Brings Mathematics Into Being, written by George Lakoff and Rafael Nunez, a linguist-and-psychologist team who study the mind and mathematics.  Their theory is simple: all mathematics comes from lived sensory-motor experience that we then translate into the domain of mathematics via conceptual metaphor.  ALL mathematics: addition, subtraction, the concept of numbers, imaginary numbers, algebra, trigonometry, and on and on.  The final case study they do, of the famous Euler equation and all the conceptual metaphors it requires, is fascinating.  Most interesting in their theory is the sense that mathematics is not just derived (recognized, manipulated, objectively discovered), but that it can also be contrived (built, constructed, subjectively created).

In Lakoff and Nunez’s scheme, one could learn to speak math.  One could learn to construct mathematical expressions in the same way we construct sentences by consciously, explicitly building math expressions based on careful selection and combination of the underlying embodied metaphors (and still strictly adhering to the operational ground rules of math).  That this is based on conceptual metaphor (closely aligned to analogy and, hence, scientific discovery), and that the metaphors are based on physical experience (suited to a physics focus on the natural world), was music to my ears.

So, I may not speak math yet.  What’s more, taking Lakoff and Nunez’s approach may require a little re-education when it comes to how I think about math.  But now I know speaking math is possible.  And in the pursuit of scientific discovery, the re-education of an educated mind is a small price to pay to keep the discovery cycle alive.

A Good Map is Hard to Find

The idea of mapping information is heavily used and widely favored today.  There are mind maps, geographical terrain maps, all manner of mathematical graphs to map relationships, and maps for “landscape analysis” used to summarize the state of the art in many fields.  But it turns out that when I look around the discovery literature, a good map is hard to find.

Clearly I am biased (as evidenced by “Spark Point” and “The Idea Mill”) toward thinking about things in a map-like framework of (1) focusing on key points and connections, and then (2) refining and re-articulating those elements into a nice, neat shareable package.  At that stage, to me, the map becomes an externalized physical model that can be manipulated and played with, letting you toy with the underlying knowledge cluster sketched out by the map.  And going back to “The Physicist’s Repertoire”, if scientific discovery involves both content and skills then one might want at least one map outlining each arena.  So what kind of map might I use?

Mind maps are the easiest choice—free software or pen and paper, associative thinking, unconstrained.  But mind maps are so free form that the permutations are endless, making it hard to assess if adaptations of the map are fruitful; there can be too many options to try.  Luckily, I came across two other maps that seem to me to have more promising bones.

One is called a “territory map”, from Susan Hubbuch’s book Writing Research Papers Across the Curriculum.  It lays out the central points in a topic, the hierarchy of points, the direction of ideas between points, and the relationships between points.  This may just have been devised as a drafting device, but it strikes me as a potential foundation for a research tool.  If one laid out a set of knowledge, like scientific discovery skills, as Hubbuch suggests, then you would have a territory map representing what is known, perceived, or believed.

Then you could play “what if?”  What if a given sub-hierarchy changed, or a direction was reversed, or relationships were added or subtracted?  Now, since Hubbuch’s territory map also has built into it a “beginning” and an “end” (again, it’s designed for drafting a paper with an introduction and a conclusion), there is an overall flow from foundation points to supported conclusion.  So, in a skills map, could this flow run from actions taken to supported outcomes?  In other words, could it be fashioned into a draft of a decision-making tool (more usually called a decision tree)?  If so, it could be a powerful way to articulate and refine scientific discovery paths, as in the toy sketch below.
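Hubbuch offers no such tool herself, so treat this as my own toy sketch: a territory map as a little directed graph (every node name is invented), where playing “what if?” means editing edges and re-tracing the flows.

    # Toy territory map as a directed graph; every node name is invented.
    # Edges point from a foundation point toward the points it supports.
    territory = {
        "notice anomaly": ["form question"],
        "form question": ["try easy cases", "search literature"],
        "try easy cases": ["supported conclusion"],
        "search literature": ["supported conclusion"],
    }

    def flows(node, goal, trail=()):
        """Enumerate every path from a starting point to the conclusion."""
        trail = trail + (node,)
        if node == goal:
            yield trail
        for nxt in territory.get(node, []):
            yield from flows(nxt, goal, trail)

    for path in flows("notice anomaly", "supported conclusion"):
        print(" -> ".join(path))

    # "What if?" play: reverse an edge or drop a node, rerun, and compare
    # which action-to-outcome flows survive.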

Another possible type of map comes from Sanjoy Mahajan’s The Art of Insight in Science and Engineering, in a chapter outlining the technique of using “easy cases” to reduce complexity in order to foster insight.  The author calls it an “easy-cases map” and it’s essentially a flow chart showing the change of a wave equation between ocean regimes and the physical meaning of each regime.  It caught my eye because I once studied the reflection of sound waves, for submarine sonar under various ocean conditions, as part of a high school internship.  And I never felt I actually grasped the relationship between domains of different ocean conditions.  Where was this map 20 years ago?!  Better late than never I guess.

Mahajan’s map-like synthesis, especially between regimes bounded by some key variable or other (which is all-pervasive in physics), strikes me as so potentially useful.  Mahajan’s mathematical map is very much the counterpart to Hubbuch’s conceptual map.  The more variations of either map you have, for the same question or discovery goal, the more you can explore.  Because once something is mapped then you can compare maps for similarities and differences—it’s a powerful multipurpose abstraction.  The key would always be to capture the most “useful” features in a map, so that the meaningful similarities and differences that can act as a spark point for discovery jump out at your perception (which is much faster than cognition).

For now, I have started drafting my first map of discovery strategies and also one of open questions in neutrino physics.  The process will surely be iterative.  But who knows: I may find that the act of mapping and iterating itself will have a part to play in my pursuit of discovery, and in any case, when you’re out pioneering you can never have too many maps.