Category: Scientific Discovery

Point of Origin

 

On the influence of tracking the evolution of your ideas on the pace of discovery.

 

Have you ever moved, or had a big change in your situation, and when you started sorting through everything you wondered why you kept it all?

I have been looking through all the handwritten paper notes I scanned just before I left England (more than 1,476 sheets of notes!).

They are red, purple, and green block notes.  Mysterious half-sentences jotted down in equally bright felt tip pen.

They are small Moleskine pages stained with decaf coconut flat whites.  Coffees bought at the chain Pret a Manger for my morning tram ride to work in England.

And they are neatly laid out calculations on blank pages.  Carefully crafted while sitting at my temporary desk.  Each time anxiously listening for the buzz of wasps through the open window in high summer in a building with no air conditioning.

Why did I keep them all?

Because I believe in the value of tracking the evolution of your ideas.

I think it can emphasize when you are harping on the same old theme.  It can point out when you have failed to try something different.  And it can highlight when you have made progress over the course of time.

All of this evidence can speed up the pace at which you gain new insights, and hence the pace of discovery.

Tracking ideas can also remind you of the reality of how you actually arrived at some inflection point in your progress.

And it can pinpoint when you suddenly veered into promising territory.  (In the lean innovation and startup world, this same concept is called a pivot point in product development.)

 

In principle, tracking the evolution of your ideas speeds up discovery because we have bad memories

 

We have very selective memories.

I won’t quote a bunch of psychology literature here, since most of us will recognize the following ideas from experience.

How many times have you argued with a family member or colleague about something they say they don’t remember happening?

Psychologically we do have selective memories, a result of “selective attention”.  We retain only the things we deem important enough to remember; everything else we ignore.

Have you ever debated with a family member or colleague and they loop back to the same argument?

Sometimes, no matter how many times you state your case from a different angle, they keep coming back to the same point.  A point you think you’ve already rationally and calmly shown to be no good.

Our brains do literally have a thinking pattern, called “Einstellung”, in which they get stuck on a particular loop that is more accessible in our memory.  Our brain can’t get past that idea to try other solutions or take other lines of thought.

Another trick our mind plays on us is to engage in something called “sunk cost bias”.

This is the belief that items we have invested our personal time and money in are more valuable than they actually are.

So once you’ve latched on to a particular train of thought (or your colleague with the “crazy theory” has) the more time you spend on it, the more convinced you’ll be that it’s valuable.

(Unfortunately, we also have a mental predisposition to believe that more complex theories are more likely to be true than simpler ones).

The point is, our minds are not perfect repositories and mirrors.

Our memories don’t capture in exact detail everything that happens to us.

And our minds can’t reflect back to us precisely what we need when we try to recall a set of events or information.

But science is full of discoveries that were driven by personal events and private internal themes.

These themes kept driving the discoverer to make certain idiosyncratic and, it turns out, progressive choices at different points along their path.  (To see an example of this at work in someone other than our beloved Albert Einstein, see the link on the discovery of high-temperature superconductivity in American Scientist below).

In some cases, these discoverers were aware of these themes in their choices, but at other times they were not.

So imagine how powerful it would be if you could see these themes, as they play out.

Powerful why?

Because being able to see the evolution of your ideas and themes would give you the ability to change themes at will. It would also allow you to recognize nontraditional inputs, linked to the theme, that might also push you toward discovery.

Hoping to recognize your evolution and thematic drivers by chance is bound to be slower, a sort of random walk.  In contrast, doing so with intent is an efficiency-driven algorithm.

 

Because it is holistic, tracking the evolution of ideas mobilizes and harmonizes environmental forces to speed up discovery

 

Not only would knowing your own intellectual history and ancestry help you make discoveries faster, but a realistic picture of how discoveries are made would enable powerful social forces to come into play.

At the level of policy, having a clear awareness of what it takes to make a discovery would allow more supportive policy-making decisions.  This means knowing how long, by what actual means, with exposure to what themes and ideas, and according to what personal choices a discovery was made.

At the group or organizational level, having an honest and holistic understanding of the scientific discovery process allows a group to better synchronize with discovery goals.  It may highlight when bringing in a new person, a new department, or a new topical theme is useful.  Or it can elucidate when new resources or more time are best given to the team already present to incubate discovery.

 

In practice, tracking the evolution of your ideas can be achieved through two activities

 

On a practical level, tracking the evolution of your thoughts requires two different mindsets to be at play (though not at the same time) as you move through your investigation process.

Let’s call them the “logging mind” and the “reflecting mind”.

(In the study of learning, related concepts are the “focused mode” and the “diffuse mode” of thinking, respectively).

These two mindsets naturally lead to two sets of activities to engage in during the investigation process, when you’re trying to track your intellectual heritage.

The first activity uses the logging mind and is where you record your exposure to various ideas, themes, individuals, sources, and activities.

I have alternately logged these things on sticky notes, in notetaking apps on my phone, in spiral notebooks, and on block notes, over the years.

In the last two years I have also started to record, along with a one-sentence reference to each item, two additional tags.

Take for example the cryptic block note, “Network Analysis”.

The first tag might be a place, such as “Chicago conference on CEvNS”.  (Or tags might be simpler, like “Nashville, TN” or “Schiphol Airport”).

The second tag might be a date such as “F.11.22.2018”.  (The “F” stands for Friday.  I use M, T, W, R, F, S, and U for the days of the week).

I find the combination of these two tags and a note allows me to bring up in my memory, by association, what I was doing, how I came in contact with the item, and why it struck me as important.

(Sometimes I can rely on just the date tag, if it’s memorable enough.  For example, around the date I moved U.S. states or countries, birthdays, holidays, and very sad family events stick with me.)
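As a toy illustration of this logging scheme, the two tags and a one-sentence note could be generated and stored as below.  This is my own sketch, not part of the original note system; the function and field names are hypothetical, and the example date is illustrative.

```python
from datetime import date

# Day-of-week letters as used in the post: M, T, W, R, F, S, U for Monday-Sunday.
DAY_LETTERS = "MTWRFSU"

def date_tag(d: date) -> str:
    """Format a date tag in the post's style, e.g. 'F.11.23.2018'."""
    # date.weekday() returns Monday=0 .. Sunday=6, matching DAY_LETTERS.
    return f"{DAY_LETTERS[d.weekday()]}.{d.month:02d}.{d.day:02d}.{d.year}"

def log_item(note: str, place: str, d: date) -> dict:
    """One logged item: a one-sentence note plus a place tag and a date tag."""
    return {"note": note, "place": place, "date": date_tag(d)}

# Hypothetical example entry.
entry = log_item("Network Analysis", "Chicago conference on CEvNS", date(2018, 11, 23))
print(entry["date"])  # F.11.23.2018
```

The point of the sketch is only that each item carries exactly three associative hooks: the note, the place, and the day-letter date.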

This associative thinking mode is actually much more reliable than a chronological one.

Research has shown that our minds are especially good at recalling visual-spatial information—such as places.  (This is famously used in the “memory palace” or “method of loci” technique by world champion memory athletes).

So for the conference tag example above, upon seeing the item, I might even be able to remember:

  • where I was sitting (the lobby of the University of Chicago Physics Department building eating a Starbucks snack),
  • what I was wearing (a much loved fuchsia and burgundy flannel shirt with a favorite pair of Italian Murano glass earrings),
  • the internal conversation I was having (about using network analysis of publications on a scientific topic to inform community white papers and roadmap documents), and
  • what had just happened that made me jot down the note (an interview with researcher Andrey Rzhetsky about an article he co-authored using network analysis to track the efficiency of group discovery in science).

 

The second activity uses the reflecting mind and is where you record your reactions and responses to the investigation process and the items recorded in the logging mind activity.

For example, keeping a research journal and “freewriting” about what you are thinking at regular intervals can work.  Just be sure to include personal details, such as what is going on in your life and environment.  And note your personal reactions towards events and evidence (a “reflecting mind” activity).

You’ve also seen how piecing together a train of thought, which is what you do with the “reflecting mind”, can lead you to an awareness of what is affecting your work and what themes are driving your process.

For example, I shared with you the Netflix-driven incidents that honed my working definition of scientific discovery in another post (“Don’t Curate the Data”, see link below).

That train of thought came to me after reading a bunch of philosophy literature.

Feeling dissatisfied with what I had read, I found myself unable to purge the language and ideas others had used and move in a different direction.

To get past this kind of Einstellung, I made a lateral move.  Instead of reading more I watched TV.

I browsed according to what themes called to me—craftsmanship, a sense of honor, nobility, care, handcraft, and diligence—and which I felt defined the spirit of scientific discovery.

These new spark points were not enough for an operational definition testable in the lab, but they were enough to guide me toward different themes.

I was very diligent about capturing my thoughts on block notes at the time.  So, I was able to recognize the old themes that were causing me dissatisfaction—categorization, thought, chronology—and consciously turn toward new themes that I wanted to include—quantitative, applied, craftsmanship.

Then I actively based my new efforts on that mental shift.

After six months of trying to come up with something new, within two weeks of that shift I had generated my own definition of scientific discovery, one I have not come across elsewhere in the literature.  (And I am working on putting together historical case studies that illustrate the merits and shortcomings of this definition, for publication in a peer-reviewed journal).

But without being able to look at my point of origin, even if only at one turn in my path, I would not have been able to consciously make this mental shift.

This kind of clear-sighted awareness and finesse is what more discoverers need to help them make smart choices and shift their thinking when the situation calls for it.

 

By analogy, tracking the evolution of your ideas is making visible an invisible maze

 

I have seen many versions of how to track the evolution of your ideas.

I’m still working on finding my own best way, which supports my intention of becoming a Maestra of scientific discovery and the scientific discovery process.

Sometimes trying to find our way toward a discovery feels like an invisible maze where we encounter many dead ends, or end up right back where we started.

By keeping a record of our thoughts and influences we make the maze visible.

And we give ourselves an aerial view of our point of origin and the paths we have traced out in our minds and with our actions.

Knowing your point of origin and where your thoughts have wandered can help speed you toward undiscovered territory, by showing you the paths less travelled.

 

Interesting Stuff Related to This Post

 

  1. Gerald Holton, Hasok Chang, and Edward Jurkowitz, “How a Scientific Discovery Is Made: A Case History”, American Scientist, volume 84, July to August, pages 364-375 (1996), freely available on Researchgate from one of the co-authors at https://www.researchgate.net/publication/252275778_How_a_Scientific_Discovery_Is_Made_A_Case_History.
  2. Daphne Gray-Grant, “Why you should consider keeping a research diary”, Publication Coach, October 23 (2018), https://www.publicationcoach.com/research-diary/.
  3. Memory palace technique at the Memory Techniques Wiki, “How to Build a Memory Palace”, https://artofmemory.com/wiki/How_to_Build_a_Memory_Palace.

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Point of Origin: On the influence of tracking your ideas on the pace of discovery”, The Insightful Scientist Blog, November 29, 2019, https://insightfulscientist.com/blog/2019/point-of-origin.

 

[Page feature photo: An aerial view of the maze at Glendurgan gardens, built in 1833, in Cornwall, United Kingdom.  Photo by Benjamin Elliott on Unsplash.]

The Seduction of “Eureka!”

Many of us believe we struggle because we can’t come up with ideas.

 

My new opinion is that having “breakthroughs” is not the reason we struggle with scientific discovery.  Knowing what to do after you’ve had a breakthrough is where the real challenge lies.

I have come across people who self-identify with one of two camps when it comes to “coming up with ideas”:

One camp believes it is “creative” and is good at coming up with ideas.

This creativity may be perceived as labor intensive (“I need a lot of time to think”), as idiosyncratic (“I only do my best work when I work after midnight while listening to songs from the musical “South Pacific” and writing while standing at the kitchen counter”), or as mystical (“Things just come to me when I dream, or they pop into my head in the shower”).

The other camp believes it is “not creative” and will not be able to come up with ideas.

This lack of creativity may be perceived as a biological trait (“I just wasn’t born with the creativity gene”), as practical (“I just stick to the facts and don’t let my imagination get carried away”), or as un-learnable (“I’ve just never gotten the hang of thinking up stuff”).

 

We all want to engineer Eureka! moments into our workflow.

 

“Coming up with ideas” is just another phrase for a “breakthrough”.  Or in the case of science, we call these ideas or breakthroughs “scientific hypotheses” (and when they are proved right they become “scientific discoveries”).

Most people I’ve met believe that what holds them back is the inability to engineer a breakthrough moment.  They think that scientific discovery eludes them because of their inability to come up with a good idea.  So they believe they struggle with generating magical Eureka! or Aha! moments, where things come together and new understanding suddenly appears.

In the pilot Insight Exchange event, where I brought together academics from different science fields and at different career stages to talk in small groups about what was holding them back from scientific discoveries in their own work, the most consistent piece of feedback I got afterward was that people wanted me to give them more strategies to engineer breakthroughs.

 

But we already have breakthroughs daily because we’re hard-wired to see meaning and patterns.

 

I recently learned about the work of neuroscientist Robert Burton on the cognitive and emotional basis for feelings of “certainty” (the belief that our understanding of something is accurate).  According to Burton, we are cognitively hard-wired to come up with ideas, i.e., breakthroughs.

More importantly, we are built to experience feel-good sensations when we believe we have achieved a breakthrough, i.e., when a spontaneous and unconscious understanding rises to consciousness.

That feel-good sensation arrives in the form of dopamine, a chemical released in the brain that triggers the brain’s reward and pleasure centers.

There are a couple of important aspects to this finding.

First, being rewarded for achieving a feeling of certainty about our knowledge encourages us to do it again.  Like any pleasurable event, we seek to repeat or renew those pleasant feelings.

So Eureka! once, and you’ll want to Eureka! again and again.

As an aspiring discoverer, this probably all sounds pretty good.  It might appear like we are biologically designed to experience pleasure when we discover things, which would encourage us to discover more things.  It seems like a progress-promoting positive feedback loop, right?

Maybe.  But the seduction of Eureka! is a double-edged sword.

Why?  Because we experience the pleasant sensations and dopamine hit when we believe that we have understood something, even if our understanding is wrong, such as when it’s based on incomplete information.

Basically, we search for meaning and patterns and our brain rewards us when we find meaning and patterns, no matter what (you can read more on this in one of Burton’s articles published in Nautilus, which I’ve linked to below).

 

Unfortunately, our brain’s reward system doesn’t depend on whether we’ve got the right pattern or meaning.

 

Our internal reward centers are indiscriminate.  Come up with a wrong explanation that your brain at least perceives as a reasonable possible pattern and you can still feel the exact experience of an Aha! or Eureka! moment.  Even if you’re dead wrong.

A second important aspect is that we have evolved to recognize patterns and assign meaning to the information we receive.

Burton uses the classic example of our ancestors recognizing lions (a pattern) and knowing what seeing a lion means to a very tasty looking pre-historic ancestor (the meaning).  We need to be able to put together growling, fur, four legs, claws, teeth, maybe a jungle or savannah plains, that the sun is high in the sky means feeding time, that lions eat smaller animals like us, etc. in order to be able to say “Aha!  I’d better run before I get eaten!”

We need to be able to combine many types of sensory information (visual, auditory, smell, tactile perceptions of temperature and time of day) and experiences (seeing lions eat other animals or even other people) together in order to be able to recognize one pattern (a hungry lion) and its meaning (I’m in danger).

What I am trying to drive home is that the two pieces that combine to make a breakthrough, pattern recognition and meaning-making, are processes each and every one of us engages in every second of every day.

We are creating hypotheses about how people interact with us, what world events mean for our lives and livelihoods, how the weather will affect our health and plans for the day, and what the ending to the TV show we are watching or book we are reading will be.

Many of the ideas that we have about these things will be right, but many of our ideas will be wrong.

It is the same process as scientific discovery—we acquire data, we search for patterns, we perceive patterns, and we make meaning from those patterns.

I don’t need to give you strategies to experience breakthroughs.  You’re doing it all the time.

But as Burton’s work highlights, the problem is that many of our breakthrough ideas are just wrong, even when we feel sure they must be right.

 

The real trick is to sift through all the wrong-headed Eurekas to find the one Eureka! that’s actually accurate.

 

If I could go back and give my Insight Exchange participants a new take home message, I would point out to them how many breakthrough ideas they had already had.  They had probably already thought up and dismissed ideas about new methodologies, new sources of funding, and reasons why certain pieces of data might fit together.  But they had also already discarded many of those ideas as too silly, too hard, too unlikely, too flaky, or too unfounded.

That they had discarded ideas was not the problem.

The problem was, when they dismissed those earlier ideas, they had also subconsciously and simultaneously dismissed their skill in thinking up new things.

It was this failure of self-awareness that was harmful to their forward progress.

Many of them had put themselves in the “I’m not creative” camp and so they had fixated on finding new ways to become capable of coming up with ideas.

They were focused on fixing an imaginary problem.

You have had many ideas, you are having ideas right now, and you will continue to have ideas.  That’s the take home idea I wish I’d given my pilot Insight Exchange group.

 

So, the discovery part comes in what you do with any ideas you have.

 

In Burton’s Nautilus piece, he hints at the fact that we are more likely to latch on to false meaning and patterns (which, remember, our brain finds just as rewarding as accurate meaning and patterns) when we have limited or inconclusive data.

Hence, the activities and skills we need are not just how to evaluate ideas, but also how to evaluate and gather data when what we have is inconclusive or limited.

And the mindset we need is just to be aware that no matter how much information we have, we are always, on some level, operating in a world of limited and inconclusive data.

The above two sentences might sound familiar.  They describe the scientific method.

It is well-designed to help us react wisely to our internal hunger for Eureka! so that we can find the accurate, and not just the available, explanation.

Formulating a cohesive understanding is still very much a work in progress for me and I do much of that thinking “out loud” here in the pages of The Scientist’s Log.

As Burton cautions, searching for certainty in our understanding can be a dangerous game of giving ourselves what we want, instead of giving ourselves the truth.

But Burton also proposes that the best remedy is to give up certainty in favor of “open-mindedness, mental flexibility and willingness to contemplate alternative ideas” (Scientific American, 2008).

Thus, it turns out that fighting for the alluring Eureka!, those lightbulb moments from cartoons, isn’t the struggle we discoverers have to overcome.  It’s the siren song of Eureka! and its pleasurable aftermath that we need to learn not to pursue at all costs.

The word “Eureka” derives from the Greek for “I found it.”

The ideas we find lurking in our minds are sometimes new sources of illumination rising from the depths of the sea of knowledge.  But other times they are just flotsam and jetsam washed up on the beach of bad ideas.

The discoverer’s way is to learn to tell the good lightbulbs from the duds and to treat the pull of Eureka! like a pleasant pastime and not an alluring addiction.

 

Interesting Stuff Related to This Post

 

  1. Robert Burton, “Where Science and Story Meet”, Nautilus (April 22, 2013), http://nautil.us/issue/0/the-story-of-nautilus/where-science-and-story-meet.
  2. Robert Burton as interviewed by Jonah Lehrer, “The Certainty Bias: A Potentially Dangerous Mental Flaw”, Scientific American (October 9, 2008), https://www.scientificamerican.com/article/the-certainty-bias/.
  3. David Biello, “Fact or Fiction: Archimedes Coined the Term ‘Eureka!’ in the Bath”, Scientific American (December 8, 2006), https://www.scientificamerican.com/article/fact-or-fiction-archimede/.

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “The Seduction of ‘Eureka!’”, The Insightful Scientist Blog, November 15, 2019, https://insightfulscientist.com/blog/2019/the-seduction-of-eureka.

 

[Page feature photo: Unusual junk, a lightbulb, washed up on a beach in South Africa.  Photo by Glen Carrie on Unsplash.]

 

An Intangible Scheme

Why are categories so useful?  When we think about things, especially when we try to understand why things are the way they are, we often try to put things into categories.  We like to decide that certain elements fit categories A and B; we match certain processes to categories H and W; and then we conclude that the outcome turned out to be a result in category Z.

This reliance on categories, on categorizing things or inventing new categories with which to label things, is something we do all the time.  I don’t have an answer as to why categories might actually be useful.  I don’t even have an answer as to why we believe categories are so useful.  But I do have some thoughts about why categories matter for scientific discovery.

 

Open Access Isn’t Universal Access

 

It all started when I was looking around for open access articles to read about scientific discovery.  I recently lost the paid subscription access I had to most journal articles.  So I had to switch over entirely to only open access articles (i.e., those that don’t live behind a paywall).  This led me to spend a week wandering around the internet looking for good quality free resources.

Finding free journal articles didn’t turn out to be the problem.  Finding relevant free journal articles did.

There is still no consistency to what peer-reviewed articles and pre-prints are freely available.  Sometimes an entire journal is open access.  Sometimes only articles the author paid to make open access are freely available.  And sometimes only articles the journal considers prestigious or very high impact are made freely available (as a form of advertising).  How much free access is the “right amount” of free access is an issue that both publishers and scientists continue to wrestle with.

So on one particular day, instead of looking for articles I needed and then checking to see if they were freely available, I spent a little time searching for free articles and then looking to see if they might be relevant.  I just needed to get a sense of what was out there.

It turned out to be time well spent because I came across some fascinating research in an area completely unknown to me: how to help kids with learning disabilities solve math word problems.

 

Learning Disabilities and Schemas

 

I’ve come across articles about how to improve student problem solving performance before.  I have worked in academia and, especially in physics, how to help students do better in class is a popular topic.

What was interesting about this research though is that I found it because I was actually looking up a definition of the word “schema.”

A schema, in psychology, is a way of mentally organizing and sorting information to help you make sense of the world based on previous experience.  We can have internal schemas about all sorts of things, like how to determine if you aced an interview, what’s appropriate behavior at a wedding, or why certain activities help you relax on vacation.

In early childhood mathematics education schemas have a more specialized meaning.  In that context, schemas refer to specific kinds of templates or recipes taught to children to allow them to solve word problems.

One of the earliest and best-known general math problem-solving schemas was given by mathematician and educator George Pólya in his book How to Solve It (1945).  His schema involves four steps: (1) clarify the problem, (2) create a plan to solve it, (3) execute the plan, and (4) check your solution.

In a 2011 article reviewing the literature on using schemas with children at risk of or with learning disabilities (both math and reading disabilities) author Sarah Powell talks about how specific (explicit and teacher-led) and lengthy (weeks to months) instruction on how to apply a schema to solve word problems can improve student performance.

I know you’re dying for me to get to the point and link this back to scientific discovery.  Here’s how that might work…

 

The Discovery is in the Transfer

 

Powell draws out of the research literature two key themes that were a lightbulb moment for my perspective on how to train yourself (or others) in scientific discovery skills.

The first key theme is that the schema training worked best when students were first asked to categorize the type of problem that needed to be solved.  For example, students were given word problems where they needed to add things together (“totaling” type problems), subtract things (“comparison” type problems), or multiply things (such as “shopping list” type problems).  (We’re talking about 3rd and 4th graders in the U.S. education system, so just 8 to 10 year olds here.)

When students were just taught how to solve each type of problem using a schema, but not to identify problem types, they did well.  But when students were taught how to identify what type of problem they were dealing with and the corresponding schema, they did even better.  This was called the “schema-based instruction” approach.

The second theme Powell found in the literature is that this performance could be boosted even further if students were given explicit instruction on how to apply the schemas they already knew to novel problems.  By explicit I mean that students were given specific guidance on how novel problems might differ from familiar ones, were taught how to link novel problems to familiar ones, and then applied the already known schemas for the familiar problems to the new problems.  This was called “schema-broadening instruction”, as in the students broadened their ability to apply what they had already been taught.
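To make the categorize-then-apply pattern concrete, here is a minimal sketch.  This is my own illustration, not something from Powell’s article: the problem-type names echo her review, but the functions and the dispatch table are hypothetical.

```python
# "Schema-based instruction" as code: first categorize the problem type,
# then apply the schema linked to that type.

def totaling_schema(parts):
    """'Totaling' problems: add the quantities together."""
    return sum(parts)

def comparison_schema(a, b):
    """'Comparison' problems: find the difference between two quantities."""
    return abs(a - b)

# The dispatch table plays the role of the student's mental link
# between a problem type and its schema.
SCHEMAS = {
    "totaling": lambda q: totaling_schema(q),
    "comparison": lambda q: comparison_schema(*q),
}

def solve(problem_type, quantities):
    """Identify the type, then run the schema associated with it."""
    return SCHEMAS[problem_type](quantities)

print(solve("totaling", [3, 4, 5]))  # 12
print(solve("comparison", [9, 4]))   # 5
```

In this picture, “schema-broadening” is the hard part the sketch leaves out: deciding which entry of the table, if any, fits a problem you have never seen before.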

I think this is fascinating.  Do you see the echoes of working on a problem at discovery’s edge here?

Consider this:

As someone pursuing discovery you have almost undoubtedly been taught ways to solve known kinds of problems in your area of interest (or you may have taught yourself well-known methods by doing a lot of studying using the internet).  So, essentially, you are like students in the first theme — you have a set of schemas to solve certain kinds of problems.  These are problems with well-known answers that you already know can be solved.  And these are methods you already know work.

At discovery’s edge you now come to a problem that you don’t know how to solve (or you are trying to identify a previously unrecognized problem and point out that it needs solving).  You still have tried and true methods, but now you have no idea how to get those to work on your new problem.  Aspects of the new problem may or may not resemble your old problem.  And for a scientific discovery scale problem, you (or someone else) will have already tried all the known methods and shown they don’t work.

So you’re stuck in the second theme, the schema-broadening problem.  How do you get the methods you know to apply to a problem that’s new?

 

Schema is Just a Fancy Word for Category

 

I loved Powell’s article because I realized that a problem students with learning disabilities may face is the same one discoverers might face.  I mean that these two situations are similar in spirit (and I in no way mean to trivialize the nuances or differences between the two).

Once I realized this, it put into perspective why I myself had felt the need, when I first started working on how to improve the techniques of scientific discovery at the individual skill level, to jump in and start generating categories.  I had phases of scientific discovery (a way to put a process into categories); I was trying to compile strategies (categories for solving discovery obstacles); and I even spent a lot of time trying to find out what scholars were saying about the types of scientific discoveries (categories of discovery).

But then I second-guessed this approach, because I wasn’t quite sure why I thought it was so valuable.  Was it just habit?

In science, especially certain topics like geology, paleontology, and particle physics, we are prone to “reductionism,” the tendency to want to break everything down into the smallest parts and to assume the behavior of the whole can be precisely determined from knowledge of the parts.  But this is not true in many natural phenomena (known as “emergent” phenomena), where the behavior that results from the interactions of the smallest parts is highly sensitive to many factors, and cannot be reduced in this simple toy-model kind of way.  Nonetheless, reductionism tends to be a mental trap and blind spot to which many scientists fall prey (myself included).

But this idea of schemas, and our ability to call them up based on our mental association between a problem type and a particular schema, sort of summed up the implicit philosophy I was following:  If I could come up with types of problems related to achieving scientific discovery, and even types of scientific discoveries, then maybe I could identify a set of schemas to overcome those problems, and those schemas might be teachable.

In fact schemas are themselves just more categories, ways to put mental processes and beliefs into categories that we can use and implement at will.

Schema-broadening, then, is the crux of why we don’t yet know how to “teach the skill of scientific discovery.”  We haven’t spent enough time thinking explicitly about why the schemas we have don’t apply to novel problems, or about why we fail to recognize that a known schema can in fact solve a novel problem.  If we put more emphasis there, on studying how we transfer schemas from one problem to another, then maybe we can boost our ability to discover the undiscovered.

 

Building the Big Picture

 

The image that came to mind was of a vast and complicated mosaic.  This mosaic not only creates one large picture, but also contains within it many smaller pictures, set pieces within the larger world of the whole mosaic.  The information we gain through observation and experimentation are like the tiny tiles which need to be placed within the mosaic.  Our theories and hard earned insights are like the set pieces.  Nature herself is like the whole mosaic.  But schemas are like the unseen outlines that tell us where the tiles should be placed in order for the mosaic to be a reflection of the real world, instead of a fantastical mirage.

It’s that intangible scheme, lurking behind the finished whole, that deserves our attention as much as the finished mosaic itself.

 

Interesting Stuff Related to This Post

 

  1. Sarah R. Powell, “Solving Word Problems Using Schemas: A Review of the Literature,” Learning Disabilities Research & Practice 26(2), pp. 94-108 (2011).  An open-access version is also available.
  2. George Polya, How to Solve It (1945).
  3. Liane Gabora, “Toward a Quantum Model of Humor”, Psychology Today online blog, Mindbloggling, April 6, 2017, https://www.psychologytoday.com/us/blog/mindbloggling/201704/toward-quantum-model-humor.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “An Intangible Scheme”, The Insightful Scientist Blog, August 25, 2019, https://insightfulscientist.com/blog/2019/an-intangible-scheme.

 

[Page feature photo: Mosaic commemorating the death of Beatles band member John Lennon in New York City’s Strawberry Fields, Central Park.  Photo by Jeremy Beck on Unsplash.]

The Ugly Truth


I love a good mash-up, so let me ask you this: What do you get when you mash-up the ideas of two prolific female academics, one in social work and the other in theoretical physics?

The answer is: my musings for this week’s post, which boils down to the phrase “the ugly truth.”

 

A Tale of Two Academics

 

So which two academics am I talking about, and which two ideas?  Here’s a quick rundown (I’ve included links to their webpages at the bottom of this post in case you want to follow up):

__________________________

Brené Brown – Ph.D. in social work

 

Currently based at the University of Houston, Brown studies topics like the intersection between courage, vulnerability, and leadership.  She’s an academic researcher, a public speaker, and runs a non-profit that disseminates much of her work in the form of research-based tools and workshops.

__________________________

Sabine Hossenfelder – Ph.D. in physics

 

Currently based at the Frankfurt Institute for Advanced Studies, Hossenfelder studies topics like the foundations of physics and the intersection between philosophy, sociology, and science.  She’s an academic researcher, a public speaker, and a popular science writer, and she maintains a blog well known in physics circles.

__________________________

In the midst of simultaneously reading the most recent popular books published by these two researchers (Dare to Lead by Brown and Lost in Math by Hossenfelder), I was struck by a link between the two.  That link had to do with the premise of Hossenfelder’s book and one of the leadership skills Brown promotes in her book.

Both of these involve the word “beauty.”

 

Sabine Hossenfelder’s Lost in Math

 

Hossenfelder argues that physicists (in her case, especially taken to mean theoretical particle physicists and cosmologists) have been led astray by using the concept of “beauty” to guide theoretical decision-making as well as lobbying for which experiments to carry out to test those theories.  By “using” I mean that she illustrates through one-on-one interview snippets how theorists rely on beauty to help them make choices about what to pursue and what to pass by.  She also illustrates, through a review of the literature, how theoretical physicists have tried to define beauty both with words (like “simplicity” and “symmetry”) and with numbers (through concepts like “naturalness”, the belief that dimensionless numbers should be close to the value 1).

According to Hossenfelder, this beauty principle drives the theoretical effort not just among a small few, but among the working many.  And she thinks it’s a problem.  Her main reason for pointing the finger is her belief that this strategy has produced no successful new theoretical results in the last few decades.

The best quote to sum up Hossenfelder’s book in my reading so far is this:

 

“The modern faith in beauty’s guidance [in physics] is, therefore, built on its use in the development of the standard model and general relativity; it is commonly rationalized as an experience value: they noticed it works, and it seems only prudent to continue using it.” (page 26)

 

Funny that Hossenfelder should mention values.  Values are something Brown talks about at length.

 

Brené Brown’s Dare to Lead

 

The crux of Brown’s book Dare to Lead is about acknowledging and leveraging qualities that make us human (vulnerability, empathy, values, courage) in a forthright, honest, and authentic way in order to become better leaders.  Brown illustrates her concepts with numerous organizational and individual leader case studies peppered throughout the book, as well as copious academic research from her team on this specific topic.

According to Brown, the prime cause of a lack of daring leadership is cautious leadership, best expressed through the metaphor of entering an arena fully clothed in heavy-duty armor.  The energy put into developing and carrying the armor takes away from the energy left to masterfully explore the arena.

Here, I’m most interested in her thoughts on values and the role they should play in daring leadership.

In case you’re wondering, Brown defines leadership as “anyone who takes responsibility for recognizing the potential in people and processes, and who has the courage to develop that potential.” (page 4)

(It’s the idea of developing potential, which resonates with scientific discovery, that caught my eye when I read the back cover of the book on a lay-over in Amsterdam.)

Brown traces much of our motives to our values: they drive our behavior and determine our comfort level when we take actions that either align with (causing us to feel purposeful or content) or run counter to (causing us to feel squeamish or guilty) our values.

The best quote to sum up Brown’s discussion of values is this one:

 

“More often than not, our values are what lead us to the arena door – we’re willing to do something uncomfortable and daring because of our beliefs.  And when we get in there and stumble or fall, we need our values to remind us why we went in, especially when we are facedown, covered in dust and sweat and blood.” ( page 186)

 

One last detail from Brown’s book will prime you for my mash-up:  on page 188, Brown gives a list of more than 100 items (derived from her research) from which to identify your core values.

The ninth word down on the list of values?  Beauty.

 

Beauty is Just Another Motive

 

So here’s where the mash-up begins.  And let me throw in one more element, just to make it fun.  Let me put this all in a metaphor, like something from a cheesy crime procedural TV show.  Ready to put two-and-two together and solve a mystery?

So, according to Hossenfelder a crime against physics has been committed (the failure to come up with something new in a timely fashion, after spending a lot of money trying to come up with something new).

Physicists have taken advantage of the means (applying beauty as a guiding principle) and the opportunity (being employed as physicists, exclusively at academic institutions in her examples) to commit this crime.

If you watch enough crime shows, you’ll know the overused phrase that TV detectives rely on.  Find the “means, motive, and opportunity” and you’ll find your criminal.

Hossenfelder has already singled out physicists as the perps.  But as a detective she would be at a loss for motive (other than maybe, “everybody else was doing it and I wanted to keep my job”).

Here, I imagine Brown chiming in as her spunky detective partner.  Hossenfelder has laid out her analytic but impersonal accounting, and now Brown swoops in to add the humane touch.  “No, no, Sabine,” Brown says.  “Beauty was not the means; it was the motive.  The means were the research funding, the students, the equipment.  But the motive, well, that’s just people being people: it was the pursuit of a beauty they could call their own.”

Okay, maybe melodrama and mash-ups don’t go together so great, but this is an interesting line of thought:

Brown’s work suggests that the pursuit of beauty as a methodological choice may not just be about expediency or experience, but also about personal fulfillment.  That’s deep stuff.  And if it’s true, then it throws the idea of changing tactics into a different category.

It means you’re changing the motive, not the means.  Beauty isn’t just a guiding principle that might work; it’s what you believe gives your work meaning when it does succeed.  And convincing someone to change their motive is a much taller order than convincing them to change their means, especially if their motives are values-driven (whether they realize it or not).

 

If You Can’t Be the Change You Want to See in the World then Bring the Change

 

Trying to constrain what motives are most likely to bring about scientific discovery seems to me like it might be a fool’s errand.

Odds are it’s about the right time, right place, and right motive putting you in a position to recognize the undiscovered.  In Hossenfelder’s defense, I think she is unwilling to accept human motives (in an appendix she advises that you try to remove human bias completely) because she’s afraid they will undermine the ability to understand the truth (understanding and truth are numbers 109 and 108 on Brown’s values list).  But there’s more than one way to reach an outcome.  If our motives are values-driven and run deep, then instead of asking scientists to change their motives, we could simply bring in more people with different motives and give them a seat at the table.  That way, alternative approaches arrive with the people who already value them and use them by default.

And Brown’s values list includes a lot of words that easily might be interesting alternative motives (or guiding model-building principles), like adaptability, balance, curiosity, efficiency, harmony, independence, knowledge, learning, legacy, nature, order, and simplicity (just to name a few).

In the spirit of a seat at the table of debate, Hossenfelder’s book offers a counter-value to the beauty principle in model-building (understanding and truth).  And Brown’s book offers a counter-value to the stoicism principle in leadership (courage and vulnerability, # 24 and # 113 on her values list).  These two researchers bring their own motives and values, serving as the bearers of not only alternative perspectives, but more importantly alternative actions that might help make progress.

[In case you’re wondering, my three core values, in priority order, are hope (# 52 on the list), respect (# 87), and affection (not on Brown’s list).  That may help clarify my motives for everything on The Insightful Scientist website.]

 

The Ugly Truth

 

You might wonder why I suggest giving more people with different values a seat at the table, your discussion round table.

Why not just try one set of values and, if it doesn’t work, replace those people with a new set who hold different values?  Or why not just try to change your own values until you achieve success?

The tricky thing about values is that it’s hard to change them once they’re set, usually sometime in middle childhood.  The useful thing about values is that it’s also hard to put yourself in someone else’s shoes.  That lack of imagination, empathy, and sympathy usually turns into skepticism.  And skepticism done right can be tremendously helpful to science, especially when it comes to verifying possible discoveries.

If we can’t understand, or don’t agree with, someone else’s motives then we automatically want and need more data and evidence to agree with their conclusions.  We set the bar of proof higher when it’s an ugly truth (to us) than when it’s a beautiful explanation (to us).

For example, suppose one scientist believes that embracing complexity captures the wonder of nature by valuing diversity, while another believes that simplicity captures the wonder of nature by valuing connection.  We may find that while one of them feels that needing many models for specific cases has a greater ring of “truthiness,” the other believes that having as few models as possible means you’re on the right track.  The gap between these two approaches must be bridged, because at the end of the day scientific discovery is about consensus converging to a base set of truths through observation and evidence.  Filling the gaps between scientific findings and their associated motives gives science a more solid foundation.  And, in our example, we may find that while at one time resources make simplicity the better strategy, at another time complexity may be just the thing for a breakthrough.

Conflicting values, and the guiding principles they generate in scientific work, are like unfamiliar or misshapen vegetables usually hidden from view.  It takes more convincing to put money into one by buying it, to invest effort in it by cooking it, and to be willing to internalize it by swallowing it.  You’d maybe rather just ignore it or toss it in the garbage.  But you never know; one person’s ugly truth may turn out to be another person’s satisfying ending.  If we don’t all sit down and share a meal together, how will we find out?

 

Interesting Stuff Related to This Post

 

  1. Website – Brené Brown’s homepage
  2. Website – Sabine Hossenfelder’s blog Backreaction
  3. Elisabeth Braw, “Misshapen fruit and vegetables: what is the business case?”, The Guardian (online), September 3, 2013, https://www.theguardian.com/sustainable-business/misshapen-fruit-vegetables-business-case.

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “The Ugly Truth”, The Insightful Scientist Blog, August 9, 2019, https://insightfulscientist.com/blog/2019/the-ugly-truth.

 

[Page feature photo:  A pretty, pert bunch of Laotian purple-striped eggplants, roughly the size of ping pong balls. Photo by Peter Hershey on Unsplash.]

Don’t Curate the Data


It’s tempting when we talk to others about our ideas to only want to share the good stuff.  To only share the things we think are logical, sound reasonable, maybe only the things we think (or hope) will make us seem smart and focused.  But this tendency to re-frame our real experiences and distill them into nice little stories we can tell people over coffee or a beer can be a dangerous setback to getting better at a new skill set.

 

Trying Too Hard to Look Good

 

Why?  Because sometimes we are so busy trying to think about how to tell (or should I say sell) others on what we’re doing or thinking that we scrub our memories clean of the actual messy chain of events that led us to come up with the polished version.  That messy chain, and every twist, turn, and chink in its construction, is the raw knowledge from which we can learn about how we, or others, actually accomplish things.  I’ll call it “the data.”

So this fear of how others will perceive our process is one thing that gets in the way of having good data about our process.  We start to curate the data to make ourselves more acceptable to others.

But we need this data to gain a meaningful awareness of what we actually do to produce a certain outcome.  This is even more important when we try to figure out how to reproduce a mental outcome.

Maybe you came up with a winning idea once, but now you’re not sure how to get the magic back.  Or maybe you want to pass your strategy on to a younger colleague or friend, but don’t really know what you did.  Maybe you’re hoping to learn from someone else who succeeded at thinking up a breakthrough solution, but they say “I really don’t remember what I did.  It just sort of came together.”

Which brings us to a second thing that works against having access to good data about our own interior processes and patterns.  Memory.

 

Mining Memory is a Tricky Business

 

We all know we don’t have good memories, even when we are trying hard (studying for tests in school, or trying to remember the name of every person in a group of ten new people you just met are classic examples).  Memory is imperfect (we have weird, uncontrollable gaps in what we retain).  Memory is selective (we have a tendency to be really good at remembering what happened during highly emotional events, but not during more mundane or routine moments).  Memory is pliable (the more we tell and retell a version of something that happened to us, the more likely we are to lose the actual memory in place of our story version).

These tricks of memory not only frustrate us when we try to observe and learn from ourselves, but also when we try to learn from others.

There have been lots of interviews asking famous scientists who made discoveries how they did it.  But their self-reported stories are notoriously unreliable or have big gaps, because they, like us, are subject to the fickle whims of memory and the hazards of telling your own biography one too many times.  Mining memory for useful insights is a tricky business.

So memory and lack of awareness (or mindlessness) cause us to lose access to the precious data we need to be able to see our behaviors and patterns from a larger perspective in order to learn from them and share them.

When I first started learning about scientific discovery, recognizing these pitfalls of bad memory and mindlessness caused me a lot of annoyance.  I would think of a great example of a scientific discovery, such as a discovery that shared similarities with an area or question I wanted to make discoveries in.  I’d think, “Perfect!  I’ll go read up on how they did it, how they discovered it.  What were they reading, what were they doing, who were they talking to?”  But of course, answers to those questions wouldn’t exist!

Maybe the discovery was of limited interest so nobody bothered to ask those questions and now the discoverer had passed away.  Or maybe the discovery was huge and world changing but the histories told about it tended to re-hash the same packaged myths—like Newton and the apple falling inspiring ideas about gravity, or Einstein taking apart watches from an early age leading to picturing little clocks when working out the effects on time of traveling near light speed in special relativity.  Part fact, part fiction, these stories leave hundreds of hours of more mundane moments, links in the mental chain, unilluminated.  Good data that could guide future generations gets lost, sacrificed on the altar of telling a whimsical story.

So when I sat down in September of 2018 to start trying to work out a more modern definition of scientific discovery—something pragmatic that you could use to figure out what to do during all those mundane moments—I kept thinking about how to better capture that process of obtaining insights, as you go.

That’s when I realized we already have the methods; the problem is that we always want to curate the story told after the fact.  And rather than curating the data that make it into the story (i.e., creating an executive summary and redacting some things), we end up curating the source data itself (i.e., never gathering the evidence in the first place).  In other words, rather than just leaving parts out of the story, we tune out parts of the story as we are living it, so that we literally lose the memory of what happened altogether.

But that story is the raw data that fields like metascience and the “science of science” need to help figure out how scientists can do what they do, only better.  And as scientists we should always be the expert on our own individual scientific processes.  The best way to do that is to start capturing the data about how you actually move through the research process, especially during conceptual and thinking phases.  Capture the data, don’t curate the data.

 

A Series of Events

 

Let me give you a real life example to illustrate.  As I said, I sat down to try to come up with a new definition of scientific discovery.  I’m a physicist by training.  Defining concepts is more a philosopher’s job, so at first I had a hard time taking myself and any ideas I had seriously.  I got nowhere for three months; no new ideas other than what I had already read. Then one day a series of events started that went like this:

I read a philosophy paper defining scientific discovery that made me very unhappy.  It was so different than my expectation of what a good and useful definition would be that I was grumpy.  I got frustrated and set the whole thing aside.  I questioned why I was studying the topic at all.  Maybe I should stick to my calling and passion, physics.  I read when I’m grumpy, in order to get happy.  So I searched Amazon.  I came across a book by Cal Newport called So Good They Can’t Ignore You.  It argued that passion is a bad reason to pursue a career path, which made me even grumpier; so grumpy I had to buy the book in order to be able to read it and prove to myself just how rightfully disgruntled I was with the premise.

Newport stresses the idea of “craftsmanship” throughout his book.  I was (and still am) annoyed by the book’s premise and not sold on its arguments, but “craftsmanship” is a pretty word.  That resonated with me.  I wanted to feel a sense of craftsmanship about the definition of scientific discovery I was creating and about the act of scientific discovery itself.

I didn’t want to read anymore after Newport.  So I switched to watching Netflix.  By random chance I had watched a Marie Kondo tidying reality series on Netflix.  Soon after, Netflix’s algorithm popped up a suggestion for another reality series called “Abstract: The Art of Design.”  It was a series of episodes with designers in different fields, like architects, Nike shoe designers, and set designers for theater and pop-star stage shows.  It pitched the series as a behind-the-scenes look at how masters plied their craft.  Aha, craftsmanship again!  What a coincidence.  I was all over it (this was binge watching for research, not boredom, I told myself).  I was particularly captivated by one episode about a German graphic designer, Christoph Niemann, who played with Legos, and whose work has graced the cover of The New Yorker more than almost any other artist’s.  The episode mentioned a documentary called “Jiro Dreams of Sushi.”

Stick with me.  Do you see where this is going yet?  Good, neither did I at the time.

So I hopped over to Amazon Prime Video to rent “Jiro Dreams of Sushi,” about a Japanese Michelin-starred chef and his lifelong obsessive, perfectionist work ethic regarding the craft of sushi.  At one point the documentary showed a clip of Jiro receiving his Michelin stars, and they mentioned what the stars represent: quality, consistency, and originality.  Lightbulb moment!  Something about the ring of three words summing up a seemingly undefinable craft (the art of creating delicious food) felt like exactly the template I needed to define the seemingly undefinable art of creating new knowledge about the natural world.

So I started trying to come up with three words that summed up “scientific discovery”.  Words that a craftsman could use to focus on elements and techniques designed to improve their discovery craft ability.  There were more seemingly mundane and off-tangent moments over a few more months before I came up with the core three keywords that are the basis of the definition I am writing up in a paper now.

The definition is unique, with each term getting its own clear sub-definition that helps lay out a way to critically examine a piece of research and evaluate it for its “discovery-ness,” i.e., its discovery potential or significance.  It’s also possible to quantify the definition in order to rank research ideas relative to one another by their discovery level (minor to major discovery).

It’s a much better idea than some of the lame generic phrases that I came up with in the early days, like “scientific discovery is solving an unrecognized problem” (*groan*).

On an unrelated track at that time, I was reading Susan Hubbuch’s book, Writing Research Papers Across the Curriculum, and had come across her idea that you create a good written thesis statement by writing out the statement in one sentence and then defining each keyword in your statement using the prompt “By <keyword> I mean…”.  So then I took the three keywords I had come up with and started drafting (dare I say crafting?) their definitions in order to clarify my new conception of “what is scientific discovery?”

So that’s the flow…my chain of discovery data:

Reading an academic paper led to disgust; disgust led to impulse spending; impulse spending brought in a book that planted the idea of craftsmanship; craftsmanship led to binge-watching; binge-watching led to hearing a nice definition of something unrelated; the nice definition inspired a template for how to define things; and simultaneously reading a textbook suggested how to tweak the template to get a unique working definition down on paper.

How do I know all this?  I wrote it down!  On scraps of paper, on sticky notes, in spiral notebooks, in Moleskines, in Google Keep lists, Evernote notes, and One Note notes (I was going through an indecisive phase about what capture methods to use for ideas).

I learned to not just write down random thoughts, but also to jot down what inspired the thought, i.e., what was I doing at the moment the thought struck—reading something, watching something, eating something, sitting somewhere, half-heartedly listening to someone over the phone…(Sorry, Mom!)?  Those are realistic data points about my own insight process that I can use later to learn better ways to trigger ideas. (And, no, my new strategy is not just to watch more Netflix.)
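To make the habit concrete, here is a minimal sketch of what capturing a thought together with its trigger might look like.  This is purely my own illustration (the names `IdeaNote` and `capture`, and the example entries, are invented for this sketch, not taken from the author's actual notes):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IdeaNote:
    """One raw, uncurated data point about an insight."""
    thought: str   # the idea itself, jotted down verbatim
    trigger: str   # what you were doing when the thought struck
    source: str = ""  # the book, show, paper, or conversation involved, if any
    when: datetime = field(default_factory=datetime.now)

log: list[IdeaNote] = []

def capture(thought: str, trigger: str, source: str = "") -> IdeaNote:
    """Capture, don't curate: record every thought with its trigger."""
    note = IdeaNote(thought, trigger, source)
    log.append(note)
    return note

# Hypothetical entries modeled on the chain of events described above.
capture("craftsmanship could frame a definition of discovery",
        "reading", "Cal Newport, So Good They Can't Ignore You")
capture("three words can sum up an undefinable craft",
        "watching a documentary", "Jiro Dreams of Sushi")
```

The point of the structure is that the trigger and source fields are recorded at capture time, when they are cheap to note and impossible to reconstruct later.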

 

Make a Much Grander Palace of Knowledge

 

Instead of trying to leave those messy, mundane, and seemingly random instigators out, I made them part of my research documentation and noted them the way a chemist would note concentrations and temperatures, a physicist energies and momenta, a sociologist ages and regions.

And then I promised myself I wouldn’t curate the data.  I wouldn’t judge whether or not impulse book buying is a great way to get back on track with a research idea, or whether or not Marie Kondo greeting people’s homes with a cute little ritual is a logical method of arriving at a template to devise operational definitions.  I wouldn’t drop those moments from memory, or my records of the research, in order to try and polish the story of how the research happened.  I’ll just note it all down.  Keep it to review.  And maybe share it with others (mission accomplished).

Don’t curate the data, just capture the data.   Curation is best left to analysis, interpretation, and drawing conclusions, which require us to make choices—to highlight some data and ignore other data, to create links between some data and break connections among other data.  But think how much richer the world will be if we stop trying to just tell stories with the data we take and start sharing stories about how the data came to be.  The museum of knowledge will become a much grander palace.  And we might better appreciate the reality of what it is like to whole-heartedly live life as a discoverer.

 

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Don’t Curate the Data”, The Insightful Scientist Blog, August 2, 2019, https://insightfulscientist.com/blog/2019/dont-curate-the-data.

 

 

[Page Feature Photo: The gold dome in the Real Alcazar, the oldest royal palace still in use in Europe, located in Seville, Spain. Photo by Akshay Nanavati on Unsplash.]

Good Things Come in Threes


Have you ever watched a movie or TV show, or read a book, where at the end of the story the main character saves the day by doing something unbelievable?   By unbelievable I mean that they do something completely out of character.  This kind of ending can leave a bad taste in your mouth, as if the writers didn’t do their job in making us believe the character had changed enough to become a person who could behave that way by the end.

When I was working on my degree in creative writing, there was a phrase that summed up the problem:

“Once is an accident, twice is a coincidence, three times is a pattern.”

The idea is that people open up to the possibility that something is plausible by seeing relevant elements happen enough times that we decide a pattern is believable.  It’s kind of a “conception through perception” game.  If a character behaves in ways that build up to the ending then we consider the ending reasonable.  But if we don’t see enough evidence then we find it hard to believe and the ending will seem like a cheap magic trick and a waste of our time (and money).

In my own experience, I’ve found it pays to be aware that this little rule of three affects not only how writers convince us of story endings, but also how we convince ourselves that some of our ideas merit pursuit.

That’s because deciding if a research idea is worth investigating is really about deciding if there’s enough of a pattern there to plausibly lead to an interesting ending…and hopefully that ending will be a scientific discovery.

So let’s talk about how to translate this magic of the number three from creative writing into research in a way that will help us decide if a research idea should move to the top of our to-do list or get shuffled to the back burner.

 

THREE…Essential Elements of an Idea

 

Most of our time as scientists is spent in the “articulation” and “evaluation” phases of scientific discovery.  Meaning, we worry a lot about defining our ideas and assessing if they are useful, correct, and/or meaningful.

In starting on a research topic, it can be hard to formulate a clear awareness of what we mean by new ideas.  And once we’ve jotted something down on paper, or typed it up, it can be difficult to decide if the idea seems worth focusing on.  The tendency is to have conversations in your head about it and then put it on the mental back burner because of the feelings of “riskiness” that working on discovery-level science can bring up.

If you’re stuck with a sense that you “have an idea”, but that you couldn’t yet share that idea with someone in a three-minute sound bite then here’s something to try.  You can write this down, type it up, do a voice memo, or some combo of all three.  Whatever works for you.  I’ll use pen-and-paper writing as my example since that’s how I prefer to work:

 

  1. Write down the idea you are trying to get clear in your head as a one-word prompt. Stick to one word, no phrases or sentences.
  2. Spend a few minutes (no more than 15) just thinking about the idea behind your one-word prompt. Now write down three more essential words that capture the heart of the idea.  These new words should sum up the essential elements, features, behaviors, or requirements of your prompt word.  Again, stick to exactly three words, no more and no less, and no phrases or sentences here either.
  3. Now create a list numbered one to three. For each number, write down what you mean by each of the essential words.  You can write in phrases or sentences here, but keep it to no more than 1-2 sentences per numbered item.  Start each numbered item with the prompt "By <essential word> I mean…"  You can spend up to a whole day on this list, but finish the entire exercise (steps 1-3) in 24 hours or less.

 

This little exercise can help you generate a clearer picture of your idea by forcing you to pick and choose what matters most to you and define it.

That’s where you as a scientist bring your best asset, your personal diversity, to the playing field.  Don’t use other people’s words or definitions for this exercise.  Set your phone aside.  Don’t use Google.  Don’t use textbooks or published papers.  Just use what you’ve already got inside your head.

I cap the time you spend on it at 24 hours to keep you from overthinking it.  The goal here is to make a rapid decision—“research this” or “shelve this.”  You want to build momentum, not stall out in the graveyard of analysis paralysis.

The reason I say identify three essential words goes back to the accident-coincidence-pattern idea.  Three words is a good sweet spot to help make abstract ideas more concrete.  Think of it like triangulating a signal: getting three points of reference lets you narrow down and enclose your idea in a more well-defined area.

 

THREE…Sources of Information

 

At this point it’s helpful to get out of your own head and take a look at what other people are saying about your idea.  Most likely, you started out by reading the work of others or listening to someone speak, and that helped spark the idea you are working through now.  So you may already have some good sources to look over again.

The goal is to get three sources (by “source” I mean a written or spoken piece of work) you can compare against the idea you formulated in the previous exercise.  You want to read them (or re-read them) and compare how you formulated your idea to how the author(s) or speaker(s) formulated it.

The most important thing is to find good quality sources to help evaluate your idea.

If you don’t know how to find or consider sources for their quality, here are some tips:

  • Look for good quality information, not good quality authors. That means you want sources that are complete, accurate and have minimal bias (or consciously acknowledged bias).  Authors, writers, scientists, journalists, etc. are only human.  No one produces good quality work all the time.  Evaluate each information source individually; don’t just assume that famous names, or even people you know who usually do good work, put in that effort this time.  We all have off days.
  • Value sources that speak most directly to the idea you are working through with real data and more references to explore. Be open to traditional (peer-reviewed published articles, monographs, academic books, etc.) and nontraditional (blogs, popular science outlets, podcasts, etc.) sources.  Evaluate each source individually.  I usually rank items with real data (even if it’s just a thoroughly explained personal example) and that reference other good quality sources I can freely access (no paywalls) more highly than ones that are tangential to my topic or only talk in general terms.
  • Try to get a good variety in your three sources. Make sure they are all by different authors or speakers.  Try to get different perspectives in each one, i.e., the authors are from different fields, different career stages, different job sectors, are different genders, ethnicities, ages, nationalities, etc.  The sources don’t need to tick all these boxes, but do the best you can.  Try to ensure that you don’t rely too heavily on just one voice in the debate, which could cause you to repeat what’s already been done instead of trying something new.

 

Again, don’t overthink this.  I’d limit the time you spend on this step to one week.  Do the best you can with the information you have access to.

Once you’ve got these sources, spend some time reading them and noting the differences between how you articulated the idea and how they articulated the idea.  You’re looking for similarities, differences, things they mention that you left out completely, and things you mention that they ignore (this last one is where scientific discovery lives).

 

THREE…Mental Examples

 

Now it’s time to move out of the “rainbows and butterflies” world and into the “bricks and mortar” world.

What I mean by this is that in the beginning we tend to be pretty excited, enthusiastic, and confident about our own ideas when they’ve only existed in our head.  This is the “rainbows and butterflies” world.  These feelings are a good way to generate momentum to get started on a project and they encourage “thinking.”   But they’re not very helpful to encourage “doing.”  Doing requires having a clear idea of what the next action is.  That’s the “bricks and mortar” part.  Rainbows and butterflies are inspiring, they captivate and focus our mental attention, but they are hard to hold in your two hands.  With bricks and mortar it’s much easier to grasp how to start building something.

Applying your idea to examples is a way to get started on the bricks and mortar “doing” and to see if you’ve missed out on any major facets of defining your idea so that it’s open to scientific investigation.  I like my three examples to cover three types (three is still the magic number!):

  1. An example that fits your idea really well (an “exemplar”).
  2. An example that doesn’t fit your idea at all (a “counter-example”).
  3. An example where it’s hard to tell if it fits your idea or not (a “neutral example”).

 

Covering these three bases will encourage you to be deliberate and thoughtful and to assess your idea for its strengths (illustrated by the exemplar) its weaknesses (illustrated by the counter-example) and its limits and areas for improvement (illustrated by the neutral example).

You want to develop a more realistic understanding of what your idea is (you could tell someone about the exemplar in conversation as a way to help describe your idea) and to acknowledge its limits and shortcomings.

If the limits make the idea not useful, or the shortcomings show up for exactly the examples you were trying to explain, then I find it’s best to go back and try redefining the idea.  Try changing the essential words or their definitions until you have an idea that holds up better to this simple evaluation method.

 

THREE…Drafts

 

Now you’re ready to put your idea into a working definition that you can make a decision on.

I know, I know: all of that work just to get to what most people consider the starting point for research!

That’s why the tagline for The Insightful Scientist is “Discovery awaits the mind that pursues it.”  Mental preparation and technique are a huge part of being a scientist and trying to make scientific discoveries.  Learning processes and strategies to wield our mindset more effectively is one of the best ways to run a winning race in pursuit of discovery.

The point of all this mental preparation is to give yourself a clear picture of where your idea stands and the challenges and advantages to trying to investigate it.  That is what gives you the ability to decide if it should move to the top of your to-do list or move to your mental back burner.

This last step ensures that you have something concrete to either (1) return to later if the idea doesn’t make the to-do list for now, or (2) act on right away if it does make your to-do list.

So set aside a day or two for this and type or write (no voice memos here) a formulation of your idea that is in complete sentences and includes your prompt word, the essential words you identified, and their definitions.  Keep the entire working definition to a minimum of one sentence and a maximum of five sentences (i.e., a paragraph).  If you prefer word count goals, aim for something in the 100 to 250 word range.

Write three drafts of your working definition:

  • First write a “rough draft” that just gets all the basic elements of your working definition (one word prompt, three essential words, definitions of those essential words) in there in grammatically correct language with proper spelling.
  • Then write a “second draft” that most likely changes some core features of the definition, like the essential words or their meanings, or adds on to clarify exactly what you mean.
  • Then write a “third draft” that tries to cut down on unnecessary words, overly complicated phrases, or overly technical words. Just include the essential in your definition, not the useful or the interesting.

 

Once you’ve got your third draft of your working definition it’s up to you to chart your own course and make a decision: are you going to research this idea or not?  With all that mental preparation you’re in a much better spot to make a more thoughtful decision and you could explain that decision to someone else.  Game. Set. Match.

 

Good Things Come in Threes

 

So that’s how I translated the idea of “once is an accident, twice is a coincidence, and three times is a pattern” into a way of gathering information to decide which scientific ideas to pursue right now.  In fact, I just used it last week to finally decide that one of the many working definitions of “scientific discovery” I have come up with over the last 8 months is worth putting into a paper to submit to the open access philosophy journal Ergo later this year.

It’s important to point out that this general rule of three is not (necessarily) sufficient for a scientific investigation to be rigorous.  That depends on the method being used.  This rule of three is more about how to decide if fledgling ideas or flashes of insight from brainstorms are worthy of becoming methodical scientific studies.  But as a general mental rule, especially if you’re feeling trepidatious, giving yourself a set of three (sources, examples, key words, ideas, sounding boards, etc.) can be an effective way to help you decide what makes the cut.

There’s another saying that also relies on the number three:  “Good things come in threes.”  In science accidents spark awareness, coincidences spark curiosity, and patterns spark discoveries.

So maybe there is power and magic to the number three.

Of course there’s only one way to find out if my anecdotal use of the number three will lead you to your own epic story of discovery: take a chance, roll the dice, and jump in with an open mind to try it out.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Good Things Come in Threes”, The Insightful Scientist Blog, July 26, 2019, https://insightfulscientist.com/blog/2019/good-things-come-in-threes.

 

[Page Feature Photo: Close up image of red dice. Photo by Mike Szczepanski on Unsplash.]

Be a Person of Many Hats


When someone asks you what you do for a living, how do you answer?

Do you give your job title?  Do you say what kinds of project(s) you are working on?  Do you give your company name or name of the topic you work on?

From researchers of all stripes, working in non-profits, volunteer and hobby groups, schools, universities, industry, and government, you hear many answers.  But when scientists get together I’ve noticed people tend to label themselves as one of four “flavors” of scientist: as an experimentalist, a theorist, a computationalist, or a citizen scientist (sometimes called a “hobbyist” or “amateur scientist”).

Oftentimes, scientists will use these labels when they get nervous about having to answer questions.  If you listen to or watch recordings of many science talks given for scientists, you might have noticed this too.

I’ll give you a few examples from physics:

If someone is asked an intensely mathematical question they might say “I’m just an experimentalist, so that’s above my paygrade.”  If someone is asked to defend the possibility of building a real prototype they might say, “Oh I’m just a theorist, so I don’t know about building things, I can just tell you the physics is there.”  If an audience member asks a question that gets a dismissive response from a speaker, they might say “I was just curious.  I follow the topic as a hobby, but I don’t really keep up with the details.”

Lately, as I’ve started studying connections between researching fundamental physics and the science of scientific discovery, I’ve been asked many times, “What would you call yourself?”, “How should I introduce you to people?”, or “What would you say you do?”

Which got me thinking about how we see ourselves as scientists.  And I’ve started to wonder if using labels as personal identities might be hurting our attempts to actually discover things.

 

Finding the Third Way

 

So, “experimentalist”, “theorist”, “computationalist”, and “citizen scientist”.  First off, I should define what I mean by these words:

“Experimentalists” conduct laboratory experiments to gather new data and generate equations to describe data they’ve collected.

“Theorists” look through old, new, and especially anomalous data to invent new descriptions and equations to explain the misunderstood and to predict the unobserved.

“Computationalists” run large-scale precision calculations on computers to simulate meaningful phenomena and generate equations that capture the real world in a form they can put on a computer.

“Citizen scientists” conduct projects to satisfy their curiosity and support their community and generate equations for joyful distraction or to improve the quality of life of a group they care about.

I think these labels apply to any scientific field—agriculture, psychology, geology, chemistry, physics, computer science, engineering, economics, you name it.  And I emphasize equations because I think that’s what distinguishes the fine arts (literature, music, art, dance, etc.) from the sciences.  The sciences try to represent Nature using numbers, language, and symbolic math, while the fine arts try to represent Nature using sound, light, movement, color, texture, and shape.

Like I said in the opening of this post, I certainly see people use these words to navigate tricky audience questions.  But I also think they get used in two other ways, depending on what kinds of scientific discoveries people are pursuing: longstanding problems in mature fields, or unrecognized opportunities in emerging fields.

 

Work Identity

 

In mature fields, the kinds with lots of funding and famous teams that people can name off the top of their head, I think three of these four labels (experimentalist, theorist, computationalist) are used by scientists and that they mean them as a sort of personal identity.  That’s because mature fields tend to have larger networks of people working in them.  With larger networks comes more specialization (to help manage the large volume of people and ideas).  People get assigned to roles and they develop expertise in that particular role over the course of their work career.

In mature fields even the training tends to label people early.  For example, at my current institution undergraduates in their first year are already assigned to a “Physics Theory” track (which requires fewer lab hours and more math) or a “Physics” track (which requires more lab hours and less math).  And in the United States at the Ph.D. level, students are divided into either experimental or theoretical tracks.  Computational folks usually fall into one track or the other as a sub-category, depending on whether they mainly work on simulations for large experimental collaborations or simulations for a small (maybe five people or fewer) theoretical group.

Meanwhile, the pursuit of scientific discovery in mature fields tends to take the form of trying to answer longstanding open questions.  The kind that make headlines in popular science journals.  In physics these are things like the nature of the early universe or why the universe has more matter than antimatter.

When individual scientists choose to see labels like experimentalist, theorist, or computationalist as work identities, they engage with discovery in more limited ways.  They do so only to the extent that the field at large has decided they should have a role in it.

So, for example, if anomalous data is generated by an experimental group, but the field decides that it’s most likely an experimental error causing the blip, then computationalists and theorists will be discouraged from contributing to the discussion, or will suffer a hit to their credibility if they join the debate.

 

Stay in your lane.

 

Work identities are kind of like a rule that says, “Stay in your lane.”  But if the key finding is to be found by taking an off-ramp, then progress will be slow or non-existent because there’s not enough freedom of intellectual movement.

Also, I mentioned at the beginning that only three of the four labels appear in mature fields.  There’s rarely any place given to the voices of citizen scientists or hobbyists at all.

 

Work Ethic

 

On the flip side, there are emerging fields and topics.  These areas are so new that very few people are actually studying them, no rules have been established yet, and even the kinds of discoveries being pursued are hard to define.  Emerging fields are uncharted territory so anything is possible.

With so few people working on them, emerging topics don’t need hierarchies, they just need bodies willing to do the work.

So an experimentalist will be someone who values running a huge amount of tireless trial and error.  A theorist is someone who values digging around to think up reasons, and ideas, and questions.  And a computationalist is someone who values grinding through data on a computer until all those numbers start to look like a pattern.  In emerging fields you are also more likely to be dismissed by co-workers until the value of the project proves itself and gains more acceptance in the mainstream.  So taking on a hobbyist work ethic becomes more important, too, because you have to value things like “passion” and “obsession” to keep people motivated through the tough times.

 

Mindset over matter

 

So in science, I think that means the labels we usually think of as identities in mature fields become a kind of work ethic in emerging fields: a style of taking on each and every task to bootstrap your way to a successful breakthrough.  The labels are not so much who you are as the mindset you approach each task with.

This mindset over matter approach is what allows researchers in emerging fields to pursue high-risk opportunities that may lead to scientific discoveries, or may prove to be dead ends.

But this still puts the brakes on the speed with which discoveries could be made, because I think researchers still feel like they have to find people who either innately have that mindset, were raised with that mindset, or have acquired that mindset by experience or training.

In other words, in both mature and emerging fields these labels are seen as compartmentalized rather than fused—you can own one, but not the others.

 

Troubleshooting Approach

 

That brings me back to the cryptic header I started this post with, “Finding the Third Way”.  I think of this as “finding the middle way”.  To me that means using these labels as skillsets and thinking of the whole pursuit of scientific discovery as a troubleshooting exercise.

The trouble might be that you’re bored and you want something interesting to do with your weekends, so you’re going to volunteer as a citizen scientist to contribute to research on soil health in your local area…just because you love veggies.

Or the trouble might be that you’re tired of having patients die on your watch from a preventable condition, so you’re going to raise money to run experiments on cheap lifestyle interventions to reduce the number of deaths.

Or the trouble might be that you think nuclear weapons are dangerous, but there’s all this plutonium sitting around in stockpiles with no safe, permanent way to get rid of it, so you’re going to dig into all the theories on how to dispose of anything that might give you a breakthrough idea to help solve the problem.

My point is that we solve problems that matter to us.  Personal problems, social problems, global problems.  But the problems are what matter most, not the fields.  Scientific discoveries are often made because their discoverers saw a problem that they couldn’t let go of and so they worked until they found a way to solve it.

These aren’t abstract, philosophical things.  They are practical, specific challenges that we tackle one troubleshooting step at a time.  And over the course of solving that problem, every one of the roles I’ve mentioned will probably come into play.

So instead of always looking, or waiting, or hoping that we can involve someone willing to take on “the experimentalist”, or “the theorist”, or “the computationalist”, or “the citizen scientist” responsibilities, we should consider building up a reserve of each of those things within ourselves.

 

Moving Beyond Our Training

 

If we want to give ourselves the best chance of solving a problem that matters to us and discovering something along the way, then maybe we shouldn’t be just one of those things (experimentalist, theorist, computationalist, hobbyist) in our lifetime.

Maybe we should be all of those things at one time or another.

They’re just skills.  Not destiny.

Like the logo for my website says, “Discovery awaits the mind that pursues it.”  And I chose “The Insightful Scientist” as my website’s name for a reason.  I wanted to remind myself, every day that I pull open the homepage, that science is about discovery and that science is bigger than just physics or just the ways I was trained to pursue discovery as a theoretical physicist.

We use our training as far as it will take us. But if the science is bigger than our training then we don’t give up, or say it’s a job for someone else.  We just stretch our minds a little wider open, learn a new skill, and jump once more into the fray.

 

Mantra of the Week

 

Here is this week’s one-liner; what I memorize to use as a mantra when I start to get off-track during a task that’s supposed to help me innovate, invent, and discover:

Be a person of many hats.

So when people ask me what I am or what I do I think I’ll start saying:

“I’m a Bernadette.”

“And it just so happens that the problem I’m trying to solve right now is how to put the science of scientific discovery into practice in neutrino particle physics.”

I won’t label myself as a theorist, or a neutrino physicist, or an academic.  Because the titles don’t matter.  The problems we’re trying to solve do.

There’s an English expression that says taking on different roles at work is like wearing different hats.  Well, I’m willing to wear whatever hat gets the problem solved, even if I don’t look good in fedoras.

 

Final Thoughts

 

So let’s recap the ideas and examples I’ve talked about in this post:

  • I narrowed down the labels we use for scientists to four: experimentalist, theorist, computationalist, and citizen scientist.
  • I classified scientific discovery into two types: trying to answer longstanding questions in old fields and recognizing new opportunities in young fields.
  • I argued that we use the four labels as identities or work ethics; but that a more agile approach is to think of them as skillsets.

Have your own thoughts on how we label ourselves as researchers and whether or not this helps or hinders the pursuit of scientific discovery?  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Website – Chandra Clarke’s Citizen Science Center, sharing open science projects.
  2. Web article – Angus Harrison, “Self-taught rocket scientist Steve Bennett is on a mission to make space travel safe and affordable for all – from an industrial estate in Greater Manchester,” interview in The Guardian online, April 4, 2019, https://www.theguardian.com/science/2019/apr/04/building-rockets-all-over-house-space-travel-safe-affordable-for-all.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Experimentalist, Theorist, Computationalist, Citizen Scientist: Work Identity or Work Ethic?”, The Insightful Scientist Blog, March 29, 2019, https://insightfulscientist.com/blog/2019/be-a-person-of-many-hats.

 

[Page Feature Photo: Fedoras fill a costume rack at the Warner Brothers movie studio in Burbank, California.  Photo by JOSHUA COLEMAN on Unsplash.]

Misfits Matter


How to use the trial and error method to make a scientific discovery.

 

I like moving, exploring new places, and visiting friends and family (in short, manageable doses).  I can put up with traveling for work.  But one thing never ceases to annoy me:  whenever I take a shower for the first time in a new place, I can’t for the life of me get the knobs, handles, and faucets to work right the first time.  I spend at least five minutes trying to get the water to stop being boiling or freezing, or trying to get the dribble out of the shower head to be decent enough to rinse.  Maybe you can relate.

But I’ll bet it never occurred to you that how you solve this problem is really scientific discovery skills in action:  you start fiddling with all the water controls you can see.

That’s because it’s a classic example of doing the right kind of trial and error.  So I’ll use it to outline what I think are four key dimensions that help structure trial and error for discovery:

  1. Putting in the right number of trials
  2. Putting in the right kinds of trials
  3. Putting in the right kind of error
  4. Putting in the right amount of error

The overall theme here is this — they don’t call it “trial and success” for a reason.  The errors are part of the magic…that special sauce…the je ne sais quoi…that makes the process work.

You may have seen versions of this idea in current business-speak around innovation and start-ups (the Lean build-measure-learn cycle anyone?).  But I needed to take it out of the entrepreneurial context and put it into a science one.

So let’s get down to brass tacks and talk about important aspects of trial and error.

 

4 Goals for Thoughtful “Trial and Error”

 

I’m going to keep the shower faucet analogy going because it’s straightforward to imagine hitting the goals for each dimension.  But to give this a fuller scientific discovery context I’ll add one technical example at the end of the post.

 

Dimension #1 — On putting the right number of trials into your trial and error.

 

Goal:

Keep running trials until you gain at least one valued action-outcome insight.

 

When you start out on a round of trial and error you are really aiming for complete understanding and the skill to make it happen on demand, with fine control.

In our shower analogy, that means it’s not just enough to know how to get water to come out of the spout.  You need to be able to control the water temperature, the water pressure, and make sure it comes out of the shower head and not the tub spout (if there is one).  Ideally, you’d learn enough to be able to manipulate the handles to produce a range of outcomes:  the temperature sweet spot for a summer day shower or a winter one; the right pressure for too much soap with soft water or for sore skin from the flu.

So one of the first things you have to figure out is: how do you know when to stop making trials?

This isn’t a technical post about conducting blind trials or sample surveys.  Here we’re talking about a more qualitative definition of done; the kind of thing you might try for an “exploratory study”.  Exploratory studies are the kind where you have no hypothesis going in.  Instead, you’re trying to find your way toward an unknown valued insight, not trying to prove or disprove a previous hypothetical insight.

The whole point of trial and error is to take a bunch of actions that will teach you how to create desired results by showing you what works (called “fits”), what doesn’t work (called “misfits”), and forcing you to learn why.

The “why” is the valued insight you’re after.

If you’ve run enough trials to figure out how to make something happen, that’s good, but not enough.  For scientific discovery you need to know precisely why and precisely how it works.

So keep running trials until you’ve come up with an answer to at least one why question.

 

Dimension #2 — On putting the right kinds of trials into your trial and error.

 

Goal:

Try a mixture of fits and misfits.

 

A key facet of trial and error is that by intentionally generating mistakes it will help create insight into how to generate success.

Partly, these trials are about firsthand experience.  Your job is to move from “wrong-headed” ideas to “right-tried” experiences.  To change how you operate, you have to clearly label and identify two things in your trial and error scenario: “actions I can take” and “results I want to control”.

Good trial and error means that you will: (1) learn the range of actions allowed; (2) try every possible major action to confirm what’s possible and what’s not; and (3) learn from experience which actions produce what outcomes.

In the last section I brought up the terms fit and misfit: in some science work, getting a match between an equation you are trying and the data is called a “fit” and getting a mismatch between the two is called a “misfit”.

So in science terms, that means you want your trials to be a mixture of things you learn will work (fits), things you learn won’t work (misfits), and, if possible, things where you have no idea what will happen (surprises).

For my shower analogy, let’s use a concrete example: the shower in my second bathroom, which both my mom and aunt have had to use (and, rightfully, complained about).

[Photo: The handles that control the shower in the guest bathroom of my UK apartment.]

So, for “actions I can take”: rotate left handle, rotate right handle, or pull the lever on the left handle.  And for “results I want to control”: the water temperature and the amount of water coming out of the shower head.

Then, I start moving handles and levers individually.  Every time I move a handle and don’t get the outcome I want, it’s a mistake.  But I’m doing it intentionally, so that I can learn what all the levers do.

Many of these attempts will be misfits, producing no shower at all or cold water or whatever.  Some may accidentally be fits.  Hopefully, none will produce surprises (though I have had brown water and sludge come out of faucets before).

I think this visceral experience is what allows your mind to stop rationalizing why standard approaches and methods should work and get on with seriously seeking out new and novel alternatives that actually work.

And these new and novel alternatives, with their associated insights, are the soul of scientific discovery.

So you want to move into this open-minded, curious, active participant-and-observer state as quickly as possible, and trying fits and misfits will help you do that.

 

Dimension #3 — On putting the right kind of error into your trial and error.

 

Goal:

Make both extreme and incremental mistakes.

You know the actions you can take.  But you need to figure out why certain actions lead to certain results.

One great way to do this is to try the extreme of each action.

If it’s safe (or you have a reasonable expectation of safety) then pull the lever to the max, rotate the faucet handle all the way, cut out almost everything you thought was necessary, and see what happens.

In physics, this goes by the name “easy cases”.  What we really mean is: use the extreme values, zero, negative infinity, or positive infinity.  Plug them into your model and see what happens.  Does it break things?  Does it give wonky answers?  Does it lead to a scenario where the role of one term in the equation becomes clearer?

That’s the beauty of extreme tests when you’re doing trial and error.  They let you crank up the volume on factors so that you can pinpoint what they might do, how they might operate in your context.
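As a small sketch of an “easy cases” check: take a toy model and evaluate it at the extremes to confirm each term behaves as expected.  The model below is an invented stand-in, response(k) = A · (1 − e^(−k)), not any specific physics equation.

```python
import math

def response(k, A=1.0):
    # Toy saturation model: vanishes at k = 0, approaches A for large k.
    return A * (1.0 - math.exp(-k))

# Extreme case 1: k = 0 -> the exponential term is 1, so the response vanishes.
print(response(0.0))   # 0.0

# Extreme case 2: very large k -> the exponential term dies away, response -> A.
print(response(1e6))   # 1.0 (to machine precision)
```

Each extreme isolates one term: the first case shows the exponential dominating, the second shows it disappearing, which is exactly the kind of clarity extreme tests buy you.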

So what about making “incremental” mistakes?  Just nudging things a little this way and a little that way to see what happens?

These are absolutely necessary too, and tend to happen later on in your trial and error process.  They are a great way to confirm and refine your understanding.

If you want to boil it down, making mistakes at the extreme ends of the action cycle hones your “this-does-that” knowledge, while making mistakes in small incremental steps helps clarify “how” knowledge.

So, oftentimes, it’s best to go after extreme cases in the early trials and then move toward incremental cases later on.  For example, with the shower handles, early on you’ll probably try rotating one handle all the way to the right or left to figure out which direction brings hot water.  Later on, you’ll turn the handle a little bit at a time, until you get the right temperature.

 

Dimension #4 — On putting the right amount of error into your trial and error.

 

Goal:

Make mistakes until you can link all major actions with outcomes.

 

This one is easy enough to grasp.  To put it more bluntly: how many times should you mess up on purpose?

The goal statement says it all: make enough mistakes that you can link all major actions with outcomes in your mind, and you know why they are linked the way they are.

Just imagine if you were told that every move you made to try to set a shower, where you didn’t know the knobs at all, had to move you toward the right outcome (no errors allowed).  How the heck would you succeed?  You would have to look up a manual, or find someone who had used the shower before.  It would probably slow the process down to a painstaking pace.  It would stress you out.  And it would require pre-existing insight into how to do it right.

But in discovery, you won’t have that kind of prior insight.  No one does.  So you have to be willing to get things wrong in order to start to generate that insight.

So keep getting it wrong in your trials until you really get why it doesn’t work.  Don’t avoid those misfit moments.  You should be able to make a table or a mind map of links between actions and outcomes.  If you can’t, keep making errors until you can.

 

The Four Trial and Error Dimensions in a Real Physics Research Example

 

I promised I would connect the ideas I’ve talked about to a science example, so let me do that:

For my Ph.D. neutrino physics work, at one point I had to write a piece of computer code that could reproduce the final plot and numbers in an already published paper by the MINOS neutrino oscillation experiment, to make sure our code modeled the experiment well.  First, I wrote some code (to estimate the total number of neutrino particles we predicted this experiment to see at certain energies) based on how my research group had always done it.  Then I wrote down in my research notebook how the existing code had previously been tweaked to produce a good match.  One value had been hand-set, by trial and error, to fit.

In the newer data published at the time, we knew this tweak no longer worked.  But at first I just tried it anyway (try misfits).  Then I started changing the values in the code (make incremental changes).  And we added a few new parameters that we could adjust and I altered those values (try unknowns).  I kept detailed hand lists of the results of my changes on the final output numbers (link actions to outcomes).
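The shape of that hand scan can be sketched in a few lines of Python: vary one tunable parameter, compare the prediction to published target numbers, and log how each trial value changes the misfit.  The “model”, the parameter values, and the target numbers below are invented for illustration; they are not MINOS values.

```python
published = [10.0, 20.0, 30.0]   # pretend published event counts to match

def predict(norm):
    # Pretend un-tuned prediction, rescaled by one hand-set normalization.
    base = [9.0, 19.0, 28.0]
    return [norm * b for b in base]

def misfit(prediction, target):
    # Simple chi-square-like sum of squared differences.
    return sum((p - t) ** 2 for p, t in zip(prediction, target))

# Keep a list of trial values and their effect on the output (the hand lists).
trial_log = {}
for norm in [0.9, 1.0, 1.05, 1.1]:
    trial_log[norm] = misfit(predict(norm), published)

best = min(trial_log, key=trial_log.get)
print(best, trial_log[best])
```

The `trial_log` dictionary plays the role of the handwritten lists: every trial value stays linked to its outcome, so misfits are recorded rather than thrown away.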

Then I synthesized these behaviors into new groupings: did it make the results too big, too small, by a little, by a lot?  Did it skew all the results or just the results at certain energies?  Was it a consistent overall effect, or some weird pattern effect?

At this point I kept many code versions to be able to have a record of the progression of my trials (fancy versioning software isn’t commonly used in small physics groups).

A screenshot showing some of the folders and files from my Ph.D. computer codes that required trial and error.

And I did handwritten notes where I worked through why certain outcomes weren’t produced and others were (try until you get insight).

 

Then I did it again.  And again.  And we did it for 10 more experiments totaling…well, a LOT of code.

In the end we got a good match and we were able to use it to complete my Ph.D. work, which explored the impact of a mathematical symmetry on our current picture of the neutrino particle.

So, trial and error, being able to willfully make mistakes to gain insight, can be incredibly powerful and remains a uniquely human skill.

As a 2011 study from Nature suggested, non-expert video gamers (i.e., many with no education in the topic beyond high school level biology) out-predicted a world-leading machine algorithm, designed by expert academic biochemists and computer scientists, in coming up with correct 3-D protein shapes, because they made mistakes on purpose while generating intermediate trial solutions.

Algorithms, by design, are constrained to do only one thing: get a better answer than they had before.  Every step must be forward; even temporary small failures are not allowed.

But we’re messy humans.

We can take two steps back for every one step forward, or even cartwheel off to the side when the rules say only walking is allowed.  Our ability to strategically move in “the wrong direction” (briefly taking us farther away from a goal) in order to open up options that in the long run will move us in “the right direction” (nearer the goal) is part of our human charm and innate discovery capacity.  But that requires we acknowledge up front that in pursuit of discovery many trials will be needed, and many of them will not succeed.
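Interestingly, some search algorithms do encode a version of this backward-stepping trick.  Simulated annealing, for example, occasionally accepts a worse solution so the search can escape local traps.  Here is a minimal sketch on an invented bumpy one-dimensional function (the function and settings are illustrative, not from any study mentioned above).

```python
import math, random

def bumpy(x):
    # Global minimum near x = 2; cosine ripples create local traps elsewhere.
    return (x - 2) ** 2 + 3 * math.cos(3 * x)

random.seed(0)
x, temperature = -2.0, 5.0           # start far from the global minimum
for step in range(2000):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = bumpy(candidate) - bumpy(x)
    # Always accept improvements; accept some WORSE moves while "hot",
    # which is the deliberate step in the wrong direction.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.995             # cool down, so backward steps get rarer

print(round(x, 2), round(bumpy(x), 2))
```

The acceptance rule is the whole point: early on, the search tolerates temporary setbacks, and that tolerance is what lets it climb out of ruts a strictly improve-only rule would be stuck in.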

 

Mantra of the Week

 

Here is this week’s one-liner; what I memorize to use as a mantra when I start to get off-track during a task that’s supposed to help me innovate, invent, and discover:

Misfits matter.

Using trial and error in a conscious, structured way can move us from having thoughts on something to having experiences in something.  Notice how “thoughts on” speaks to the surface, like a tiny boat on a broad ocean, while “experiences in” speaks to the depths, like a diver in deep water.  So try.  And err.  Welcome error by remembering that misfits matter and that a deep perspective is where radical insight awaits.  In taking two steps back for every one step forward, those two steps back aren’t setbacks; they’re perspective.

 

Final Thoughts

 

So let’s recap the ideas and examples I’ve talked about in this post:

  • I shared the four dimensions that help define strategic trial and error: putting in the right kind and number of trials, and putting in the right kind and amount of error.
  • I shared an example of how trial and error has been used in my own physics work and in biology to get useful insights.

Have your own recipe or experiences related to trial and error?  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Web Article – “Insight”, Wikipedia entry, https://en.m.wikipedia.org/wiki/Insight.
  2. Web article – Ed Yong, “Foldit – tapping the wisdom of computer gamers to solve tough scientific puzzles” Discover magazine website, Not Exactly Rocket Science Blog, August 4, 2010, http://blogs.discovermagazine.com/notrocketscience/2010/08/04/foldit-tapping-the-wisdom-of-computer-gamers-to-solve-tough-scientific-puzzles/#.XKPkLaZ7kWo.
  3. Website – MINOS neutrino oscillation experiment, http://www-numi.fnal.gov/.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Putting the Error in Trial and Error”, The Insightful Scientist Blog, March 22, 2019, https://insightfulscientist.com/blog/2019/misfits-matter.

 

[Page Feature Photo: An ornate faucet at the Hotel Royal in Aarhus, Denmark. Photo by Kirsten Marie Ebbesen on Unsplash.]

Awaken Sleeping Giants


Tell me if this sounds familiar to you:

You have a lightbulb moment.

A great idea you’ve never seen or heard before.  It seems like it could really move things in an amazing new direction.  You’re excited.  No, SUPER excited.  You deluge your friends and family with all the amazing, awesome outcomes your idea could have.

Once that first flush of excitement passes and the adrenaline from having had a genius moment settles you maybe start to look around for useful info on parts of your idea outside your knowledge base.  And that’s when it happens.  You come across a paper, a talk, a website, a colleague in conversation, where they discuss something painfully close to your supposedly “novel” idea.

The idea’s already been done.

To rub salt into the wound, as you dig more you find out that “the idea”, what you thought was “your idea”, was tested out by some genius years ago.  And they’ve already written about it, or tried it out, and moved on.

*sound of your ego and hope deflating here*

In my case, the “offending paper” was written before I was even born.  That’s four decades old!  I never even stood a chance of getting the first idea on that table.

So what does any of this have to do with old papers that have low citation rates?  In other words, ideas that have been out there for a while, but nobody seems to care or talk about?

 

Deciding if the Old Paper in Your Reading Pile Should Still Be There

 

Well, as a matter of fact, the paper in my example was exactly that kind of paper—it had vanished into history like an unliked and unshared tweet or Facebook post.

But if you read my Research Spotlight summary (link at the end of this post) on a Nature paper about “Team Size and ‘Disruptive’ Science” you would have learned that researchers recently discovered a link between teams that publish more “disruptive” scientific papers, patents, or computer code and the research papers they cite: teams proposing new ideas more often cited old unpopular papers.  By unpopular I mean those old papers weren’t cited very often, ever.

It turns out that the paper that proposed the same idea I had was an old paper (well, older than I am) and nobody seemed to cite it.  I had a good handle on just how unpopular it was because it was written by a European physicist in my own exact research field, it was published in a respectable journal, the physicist gave talks about it…  And yet I’d literally never heard of him, his work, or his contribution to this idea.

Before I read the Nature paper I mentioned before on teams and disruptive science, I assumed that this paper I found and its lack of fanfare was a bad omen:  “That means his/my idea must be a bad one.”  I had a little pity party for myself and then I tucked the PDF and my notes into a file on my laptop only to review on rare and sentimental occasions.

But in light of reading the Nature paper, I’ve completely re-evaluated my attitude and thoughts toward both the idea and my predecessor’s paper.  Instead of setting it aside, I need to re-evaluate what low citations means in this case.

And as I thought about it more and included my own experiences in publishing papers, I realized that low citation rates could have at least three meanings for a paper.  I nickname these “the niche”, “the bad”, and “the visionary.”

 

The Niche Paper

 

For niche papers the low citation rate reflects the fact that no one really cares about the paper’s content.

 

There might be a few reasons for this.  One reason reflects the content itself.  It could just be an overly specific topic (like the singing habits of mice…don’t look shocked, mice do actually “sing”), or a topic that it’s nearly impossible to research because the tools and situations don’t exist yet (like extra-dimensional theories of how neutrino particles get their mass).

The other reason reflects a failure of communication.  Maybe the authors used completely different technical jargon or math notation than anybody else has in published work.  So even if we try hard, the rest of us just might not know what the heck they’re talking about.

But there’s a third possible reason suggested by reading a paper in Technological Forecasting & Social Change, which is the focus of this week’s Research Spotlight summary (link at the end of this post).  Maybe it’s an emerging field, working right at the edge of known knowledge.  As a result, it’s living in a sweet, but difficult, spot: at discovery’s edge.  At this point in history, it falls into a niche because both of the reasons above will trip up the paper: (1) no one will care about it because it’s not “a thing” or “trending” yet; and (2) no one will understand what it’s talking about because the focus of study is so new or under-researched that many ideas, concepts, and words will have to be invented to talk about it.

And by the way, don’t assume that “emerging” just applies to stuff in the last 5 years.  Sometimes emerging science takes decades to incubate, with just a few researchers keeping the embers alive, before it really takes off and becomes a new field of study in its own right.

Of course only the first kind of niche paper (the too specific) and the third kind (the emerging field) are potentially useful for breakthrough science, innovations, or inventions.  The second kind (the Greek-speak) just needs a good re-write.

 

The Bad Paper

 

For bad papers the low citation rate reflects the fact that the work it describes just wasn’t that good.

 

There are lots and lots of reasons, big and small, why a paper might be bad.  You could write volumes about this topic and, unfortunately, find lots of real examples to illustrate what you mean.  In fact, right now I bet you can picture an example you thought was junk work and that you still wonder to yourself, “How did that get (published/funded/awarded/bought/greenlit)?”

I have no desire to make this post a laundry list of complaints against certain papers I’ve seen (I have no patience with pessimism or destructive criticism).  The point here at The Insightful Scientist is to make progress toward scientific discovery and insight by finding fresh, valuable ways to move forward.  Not wallow and howl at the bad stuff people sometimes produce.

So let me stick to what you need to do here: recognize when a paper is “bad” so you can move on from it quickly.

Right now, I’ll just point out two red flags that mean you should avoid using a paper at all, even to inform your own thinking, let alone to cite in one of your own writings.

First, if a paper uses inconsistent logic to either (1) justify its own findings or (2) compare itself to the works of others then you should consider it a “bad” paper and avoid it.  You don’t want that bad mental habit to rub off on you or to have your credibility tainted by association (you’ll need that credibility later on when you want to encourage a broader community to engage with your ideas).

Second, if a paper does not give sufficient information to evaluate its methods or conclusions then you should consider it a “bad paper” and leave it out of your information pile.  Again, it’s a bad habit, not laying out fully and clearly in writing what makes your work tick.  So do yourself a favor and find a better paper.  [The exception here is in sharing information about a patent or potentially patentable invention, where sharing too much detail could lead to problems in market competition.  But the answer is simple: if you publish you have an obligation to share.  The purpose of making something public by writing about it is to expand the public knowledge domain.  If you don’t want to share, don’t publish.]

What I like about using these two red flags, to seek and ignore bad papers that have wandered into your information orbit, is that you can check for them even if the paper is well outside your area of expertise.

And if a radical breakthrough is your goal, you should be reading outside your expertise.

I’ve been reading in sociology, biochemistry, and library sciences to try and answer a neutrino physics question (those other fields help improve my skill set, which makes me more adept at tackling my own field).  Research suggests that this kind of intentional, broad information gathering can trigger radical insight.

Do what it takes to get the job done.  Read widely, and filter out bad papers as you find them.

 

The Visionary Paper

 

For visionary papers the low citation rate reflects the fact that the ideas presented are too far ahead of their time for others to recognize or act on yet.

 

I know, I know.  All you futurists, innovators, scientists, inventors, and entrepreneurs out there (myself included) are drooling over this category.

Visionary.

The word just smells of greatness, and we all want to make a contribution that will make it into this category.  So it’s only natural to get a little over-excited and want to label a paper related to your own “big dream” science or innovation as “visionary”.  It gives us a feel-good moment and a sense of fate, an image of what our own future might look like.

But if you remember my story from the beginning of this post, that kind of warm-and-fuzzy meets adrenaline-pumping moment is what got us into this awkward mess, sorting papers into categories, in the first place.  So here we are trying to be mature about this low citation paper and figure out what it means that someone else already came up with it, but no one paid attention.

On The Insightful Scientist I have made it my mission to learn how to be a pro at scientific discovery and share that with others.  So let’s get objective.  How can we tell if the ideas are ahead of their time?

I’ll assume that the paper has avoided any of the red flags that would make it a bad paper to rely on. (If you’re avoiding that evaluation because you’re afraid to see that paper not make the cut, have courage and be decisive.  If the paper is “bad,” it’s bad for your long-term discovery goals.)

As you evaluate the paper, remember that you’re at an advantage because you’re a “future human” 5, 10, 20, 40, even 100 years after the paper was written.  You know how some aspects of the “story” (i.e., the science) actually turned out and you can use that to help you evaluate.

Did this old paper have the right mindset—is it logically consistent, does it emphasize objectivity and evidence, and does it share information willingly?  Did other ideas presented in the paper turn out to be true or stand the test of time?  Did the paper get those ideas right, even though they were based on some false assumptions?  Are those false assumptions of the “past humans” who wrote the paper mostly a result of not having access to the data, technology, populations, or even big pots of money like we future humans have now?

What you’re really trying to figure out is if the authors had good research instincts (due to experience, mindset, or both), even in the face of limited resources.  If they did, then it’s possible they had honed their visionary skills about the topic and you might be looking at a visionary paper.  It may have provided a past blueprint for a good idea that the future can now act on.  If you want some examples of papers in this category, check out the link toward the end of this post.

And if your final decision is that the low citation paper you’ve got is visionary…build on it!

 

Learning to Sort Papers Like a Pro

 

If you remember, at the beginning of this post, I said this whole stream of thought came about because I had a low citation paper sitting in a neglected folder.  I’d originally, purely based on citation rate, dismissed it as “bad”.  But upon re-evaluating it I’ve decided it sits somewhere between niche and visionary.  I’m still working out which category I think it fits in best.

But the important point is that I’ve re-engaged with the paper and I’m wrestling with the science, ideas, and methods it presents in a much more thoughtful way.  I’m not falling in love with it (like a novice might) and I’m not dismissing it out of hand either (like an old-hand might).  I’m handling it like a pro who knows that when it comes to pursuing scientific discovery with deliberate skill, learning to distinguish between the niche, the bad, and the visionary is part of your job description.

 

Mantra of the Week

 

On a final note, before I sum this post up in a short bullet list, let me say this:

If you’ve read some of my past posts from 2018, especially the old versions, then you know I sometimes like to end with an artsy, one sentence tagline, and I use the post feature photo to illustrate it.

These one-liners are what I memorize to use as mantras when I start to get off-track during a task that’s supposed to help me innovate, invent, or discover.

This week’s one-liner is:

Awaken sleeping giants.

If you want to change the knowledge landscape then sometimes you have to dig into the past to find ideas that are sleeping giants.  Once awakened, the rumble and weight of their presence will cause heaven and earth to stand up and take notice.  And as physicist Isaac Newton wrote, “If I have seen further, it is by standing on the shoulders of giants.”

 

Final Thoughts

 

So let’s recap the ideas and examples I’ve talked about in this post:

  • I suggested a way to sort old unpopular papers in your information pile into three categories: the niche, the bad, and the visionary.
  • I pointed out why you should throw out papers falling into the bad category and consider building on papers in the niche and visionary categories.
  • I talked about how each of these categories of papers fit into the big picture of the pursuit of scientific discovery.

Do you have your own sorting and sifting criteria for papers?  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Web Article – Carl Zimmer, “These Mice Sing to One Another — Politely,” The New York Times, February 28, 2019, https://www.nytimes.com/2019/02/28/science/mice-singing-language-brain.html.
  2. Web Article – “Like Sleeping Beauty, some research lies dormant for decades, study finds”, Phys.org website, May 25, 2015, https://phys.org/news/2015-05-beauty-lies-dormant-decades.html.

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Low Citation Papers: The Niche, the Bad, and the Visionary”, The Insightful Scientist Blog, March 15, 2019, https://insightfulscientist.com/blog/2019/awaken-sleeping-giants.

 

[Page Feature Photo: Standing figure and reclining Buddha at the Gal Vihara site in Sri Lanka.  Photo by Eddy Billard on Unsplash.]

Three Keys


It’s that time of year when you suddenly realize that any goals, plans, or New Year’s Resolutions you have for 2019 already seem like a bad idea.  I certainly have.

I’ve been sick, I’m in the midst of a significant professional transition, and I still can’t even find the notebook where in December 2018 I wrote down all the wonderful things I hoped to make happen in 2019.   I only remember off the top of my head two goals: (1) eat more nutritious food daily and (2) practice my scientific discovery skills daily.  The food goal has been easier to do.  In fact, at least I’ve started on it!  But the discovery goal has only seen me spend two dedicated hours since 2019 started practicing my ability to invent new theoretical equations.

Why are some goals and habits so much easier to follow through on?  In particular for The Insightful Scientist why are some habits, like making progress on a big discovery goal, so hard to practice?  I think that some of the same basic things that hold us back from finishing fitness, diet, hobby, money or other personal goals also plague our ability to act on discovery goals.  So let’s talk about ways to fix our discovery habits and make 2019 a better discovery year.

 

3 Keys to Good Discovery Resolutions

 

You’ve probably heard these suggestions before, but let me remind you of three things that you should have in place to increase the chances that you’ll follow through on a goal.

In this case, I’m going to use my own attempts over the last three months to come up with a 30-day discovery skills mini-workout for myself as an example.  I always remind myself of best practices to develop new habits (and make room for them in my mind and life) by reading a post from Leo Babauta’s wonderful Zen Habits site (I’ve been a fan since 2010).  I’ll condense lots of Leo’s advice into a list of three keys for successful goals and habits:

 

  • Have a well-defined goal, so you know when you’ve succeeded.
  • Have a clear picture of concrete actions to take to achieve the goal, so you’ll act.
  • Have a way to monitor your progress towards your goal, so you’ll adapt and stick with it.

 

Let me break this general advice down and translate it into my specific example to give you one idea of how you might make these keys work for you and your scientific discovery goals.

 

1. Define Your Goal

 

Original Poorly Defined Goal:

Spend 30 days of daily practice focused on improving my scientific discovery skill set.

That was the original goal statement I had in mind in January.  Is it “well-defined”?  No.

You can tell if your goal is well-defined by how long it takes you to attempt your first day on a program once you’ve fully committed to starting it as soon as possible.  It took me 19 days (I use a Bullet Journal to loosely track events, so that’s how I know) before I sat down and tried it.  That’s a nineteen-day delay after I said, “I’ll absolutely, whole-heartedly start this tomorrow”.

As a general guideline, I have since set seven days as the maximum gap between firmly committing to start and actually starting.  If it takes me longer than that, odds are good I don’t have a clear enough goal in mind, so I procrastinate.

For this example, I was looking for a discovery skill set goal, not a discovery project goal.  At the end of this post, I’ll come back and briefly talk about applying this idea to a topic-specific scientific discovery research project.  But for now, we’re talking about skills.

To come up with a better goal and hit the refresh button on starting my program, I did a few things.  I freed up space.  Literally.  I did a Getting Things Done mind sweep and emptied one room in my house of major distractions (pictures, books, papers, decorative objects).  Then I spent time in my de-cluttered space and asked myself the same question over and over again: what exactly do I hope to be able to do at the end of my 30-day discovery skills project that I cannot do right now?

For me, as a theoretical physicist, a key skill is to be able to generate an equation that represents a new physical idea.  In fact, generating equations is a key part of scientific discovery for many scientists.  So, that’s the skill I wanted to focus on first.  After two weeks of concentrated thinking, I came up with a working solution:  A daily practice I call “creative math”.  I can hear mathematicians groaning already, but physicists are notoriously more irreverent toward math—we’ll happily build the Lego equivalent of a Bugatti so long as it mostly gets the job done.

So, let me re-define my goal now using creative math.

 

Final Well-Defined Goal:

Over a 30-day period engage in at least 30 creative math sessions total, lasting no less than 20 minutes and no more than 1 hour each, with a minimum of 1 practice session a day, excluding Sundays.

 

Now we’re clearer.  I could make a simple tracker (maybe something Bullet Journal style, or using a phone app like Loop Habit Tracker or Habit Bull, or even use a simple mark on a calendar) and just check off as I succeed at completing each session.  And I can (and have) put two timers on my phone, one labelled “CMath 20” the other “CMath 60” to keep me on track during sessions (kitchen timers, Time timers, and Pomodoro apps have also worked well for me).
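A tracker that enforces the goal’s rules is only a few lines of code.  Here is a small sketch: count only sessions that meet the stated limits (20 minutes minimum, 1 hour maximum) toward the 30-session total.  The session durations below are made up for illustration.

```python
# Goal rules taken from the well-defined goal statement above.
GOAL_SESSIONS, MIN_MINUTES, MAX_MINUTES = 30, 20, 60

# Hypothetical log of minutes spent per sitting.
sessions = [25, 60, 15, 45, 70, 20]

# Only sessions within the allowed duration window count toward the goal.
valid = [m for m in sessions if MIN_MINUTES <= m <= MAX_MINUTES]
print(f"{len(valid)} of {GOAL_SESSIONS} sessions complete")
```

Making the rules executable has the same effect as the phone timers: a 15-minute sitting or a 70-minute marathon simply doesn’t count, so the tally stays honest.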

That’s one key in place to jump start my discovery skills program.  Two to go.  So what do I mean by “creative math” anyway?

 

2.  Define Concrete Actions

 

Original Poorly Planned Actions:

Spend at least 1 hour of butt-in-chair time practicing a discovery skill.

First, I should explain my quirky phrase “butt-in-chair time”.

I use this to specify what I mean by having actually tried on intellectual tasks.  For fitness goals, defining “try” and “effort” is easier: do so many reps, walk or run so far, lift a certain amount of weight, etc.  But how do we define a good level of try for intellectual tasks?  I define butt-in-chair time as the hours or minutes spent actively hand writing or typing up material directly relevant to producing the task outcome.  If you do more work standing up (whiteboard, or machine shop bench anyone?) then you might think of another phrase (Hand-on-board time? Powered-up-tool time?).

These sessions don’t have to be continuous, but the minutes have to add up to the target total.  If I just sit there thinking, having an internal conversation, checking email, WhatsApp or whatever, that time doesn’t count.  But if I’m writing in a notebook at a coffee shop (like right now), at my desk, on the tram, seated at a bus stop, on an airplane…you get the idea.  All of that time counts.  The total estimate won’t be perfect, but it does make me more honest about “Did I actually try?  Or did I just pretend to try?”

From the newly defined goal, I’ve set the activity for this butt-in-chair time as “creative math.”  The goal of the session is to generate an equation that represents a physical situation.  Over the course of the 30 days I should be able to see improvement (or lack thereof) in my ability to invent these equations.

First, I needed to devise a layout for this creative math.  I knew the final session “outputs” (the physical artifacts demonstrating that I had actually completed a session) needed to be pen-and-paper pages (these are easiest to buy everywhere and use everywhere; so no excuses for not completing a session).

Also, not doing it in a digital format (i.e., in an app or using software) helped with another aspect: I wanted to practice using internal conceptual resources, not pulling from external sources.  So, no textbooks, guides, internet searches, or even my own research notes, allowed during the session.  If I could learn to be competent without those tools, I could become more masterful with those tools.

This just left the overall format for my pen-and-paper pages.  At the time I was learning about Mike Rohde’s “sketchnotes” system as part of my on-going research.  So, I adapted his sketchnote task to my idea.  A sketchnote is traditionally a one-page sheet of handwritten and drawn notes, taken down during a talk or lecture, and designed to capture just the essential points, using descriptive doodles and hand drawn fonts.

From this I came up with a creative math template: a one-page layout with a central box, which emphasizes that I am looking for an equation, plus a doodle and a fancy-typeface statement outlining the physical situation I want to describe.  I then spend 20 minutes to 1 hour filling the front side of the sheet with keywords, questions, and phrases affecting the physical situation, each of which I immediately put into a math form.  Toward the end of the session I combine all the math forms I have into one final equation, which counts as my “answer”.

My only other rule is that I avoid using common notation for any of my math.  I do that to avoid (1) biasing myself toward what I think the final answer “should” look like (this also slows me down and makes me more mindful of what I’m doing) and (2) cheating by using equations I already know from memory.

 

My first attempt at creative math. [Photo by B. K. Cogswell.]
You might wonder why I go to the effort of avoiding using things I’ve already learned when working a creative math practice session.  The reason will become clear when I discuss the third key to developing a solid scientific discovery skills practice program in the next section.

Before I close out this section, let me pull it all together and write down a new and improved concrete actions statement:

 

Final Well-Planned Actions:

Spend a minimum of 20 minutes and a maximum of 1 hour a day of focused butt-in-chair time producing one page of creative math, at least six days a week.  A completed creative math page includes a statement of the specific physical situation being modeled, a doodle of that situation, and a final guess at one equation that describes an aspect of the situation.

 

Do you see how I keep moving from a generic desire to a specific intent of when and how to act and what specifically to do?  That mental transition is what you’re after before you start your own scientific discovery program.

Now we just need one more piece to have a solid plan we can start and finish:  we need some way to monitor our progress.

 

3.  Monitor Your Progress

 

Original Poor Tracking Idea:

Make daily practice pen-and-paper handwritten sheets and put them in a binder to get a portfolio of practice pieces.

 

Following on the sketchnoting theme and sticking to pen-and-paper, I initially planned to monitor my progress in a very visual and physically tangible way: I was going to make a pile of “stuff”.  The bigger the pile, the more practice I had under my belt!  That pile was going to be handwritten pages representing multiple attempts.  Like art students who have hundreds of practice sketches tucked into a portfolio, I would have creative math pages tucked into a binder.

This was a pretty solid first thought, but it did not get at the heart of my discovery practice goal.  Monitoring pages evaluates my level of consistency and the accumulation of practice hours.  Good information, but not the most important thing.

The most important thing to monitor is: are the invented equations I’m coming up with getting better over time?

To answer this, I had to come up with a simple but more sophisticated way to think about the equations I was creating.

First, I broke the equations into two elements: ingredients and connections.  Ingredients are math variables like mass, density, temperature, etc.  Connections are math operations like subtraction, powers, derivatives, etc.  I then developed a new template to go on the back side of a creative math page.  On it I list and count the ingredients and connections in my answer.  Then I look up the actual answer (in papers, websites, textbooks, etc.) and list and count its ingredients and connections.  Finally, I check how much I got right.  The goal is to get all the ingredients listed, with the connections in the proper order, and only the pieces I get right count toward my percent correct.
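As a rough sketch of how such scoring could work (the piece names and the matching rule here are my own hypothetical simplification; the real check also cares about the order of the connections, which this version ignores):

```python
def score_attempt(my_pieces, correct_pieces):
    """Percent of the reference equation's pieces (ingredients plus
    connections) that also show up in my invented equation.
    Note: this simple version ignores the ordering of connections."""
    matched = sum(1 for piece in correct_pieces if piece in my_pieces)
    return 100.0 * matched / len(correct_pieces)

# Hypothetical example: scoring a guess against a reference answer
correct = ["density", "area", "speed", "multiply", "square"]
guess = ["density", "speed", "multiply", "add"]  # missed area; wrong op
print(round(score_attempt(guess, correct)))  # 3 of 5 pieces matched -> 60
```

Only pieces present in the reference answer can earn credit, so padding a guess with extra ingredients never inflates the score.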

 

My second attempt at creative math and my improved way to monitor my progress. [Photos by B. K. Cogswell.]

This brings me back to why I use unusual notation.  I did two practice sessions in January, full of enthusiasm, and in my Bullet Journal started a collection called “Creative Math Ideas”, so I would have a stockpile of physical situations to use each day during practice.

 

My “Creative Math Ideas” bullet journal collection. [Photo by B. K. Cogswell.]

It turns out my enthusiasm was a case of running before I could walk.  The ideas were all good questions, but many of the topics I initially picked did not have easy-to-find answers (the papers were too niche, or science didn’t have a clear answer yet).

To get around this I realized I needed to start out with simpler examples where I had already seen an answer, so I knew one existed.  But I didn’t want to cheat and use memory.  After all, at some point even the simplest problems were scientific discoveries.  Two hundred years ago, the vast majority of the science known today hadn’t been discovered yet.

So even simple problems are good practice for discovery, so long as you actually try to discover the answers for yourself.

By using non-standard notation and relying on personal experience rather than textbook knowledge, you can treat these problems as creative math candidates.

 

Schaum’s books I am using as a source of simple problems for my creative math practice. [Photo by B. K. Cogswell.]

Final Good Tracking Idea:

Estimate at the end of every creative math session how good my invented equation is, by writing down the percentage of ingredients and connections I got right, checked against a known correct math equation for that physical situation.

 

And that’s the final piece in place for a solid 30-day program to improve one of my scientific discovery skills.  I started it eight days ago and so far so good!

 

Trying the 3 Keys with a Discovery Research Project Instead of a Discovery Skills Project

 

You may like this idea of a discovery skills mini “boot camp” that you could do as a yearly goal or refresher.  But what if you wanted to adapt it to a project rather than a skill set?

I’ve dropped some hints along the way as to how this might change.  The three keys must still be met.  But for the first key your goal would be a physical output rather than an ability improvement.  For the second key you would tailor your butt-in-chair time to whatever output you need: if it’s code, then coding sessions; if it’s a physical prototype, then building and modifying parts or refining a schematic; and so on.

Unlike in my example, which had a different problem each session, you would now work on the same problem each session, with one new variation.  If you’re doing equations, then session 1 would produce version 1 of the equation, session 2 version 2 of that same equation, and so on.  You might come up with variations by emphasizing an aspect that’s a strong physical limitation, or by emphasizing a failed aspect of the previous version.

Finally, for the third key you would monitor progress by evaluating at the end of every session how well that version fits the solution criteria you need.  Is it cheap enough?  The right size, shape, or speed?  Does it explain the unexplained part?  Does it create the graph features you want to match?

If you spend a little time in the beginning thinking it through, you can come up with a 30-day kickstart to get you putting in meaningful time trying to discover what matters to you.

 

Final Thoughts

 

That’s my creative math practice in a nutshell and how I used three keys of good goal and habit setting to come up with it.  This creative math practice is how I got back on track with my New Year’s Resolution to practice my scientific discovery skills and become a better discoverer daily.  I’m running my 30-day creative math practice program right now and it’s already helped me notice some new avenues to explore in my physics research.

So let’s recap the ideas and examples I’ve talked about in this post:

  • I covered an example of how to define a 30-day practice program to improve your skills at inventing equations describing physical situations off the top of your head.
  • I discussed the three keys to creating a good practice program: (1) define a clear and specific program goal; (2) define concrete steps to take on a regular schedule; and (3) define a way to monitor your progress toward your goal.
  • I pointed out ways you might adapt my example scientific discovery skills program into a discovery research program by using the “butt-in-chair time” idea to produce new ideas on a regular schedule.

If you’ve got your own practice techniques I’d love to hear about them.  Or if you try out the program I’ve shared here I’d like to know how the experience goes.  And if it helps inspire you to a breakthrough let me know!  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Blog Post – Leo Babauta, “Set Powerful Deadlines,” April 26, 2016, https://zenhabits.net/deadlines/.
  2. Web Article – Andrew Krok, “Life-size Lego Bugatti actually works, has over 1 million pieces: It gets its power from 2,304 Lego electric motors”, Roadshow Reviews by CNET, August 30, 2018, https://www.cnet.com/roadshow/news/lego-bugatti-chiron-life-size/.
  3. Blog Post – Mike Rohde, “Ideas not Art – Students learn how to use sketchnotes to improve their notetaking in lectures”, December 31, 2018, https://sketchnotearmy.com/blog/2018/12/31/ideas-not-art-students-learn-how-to-use-sketchnotes-to-improve-their-note-taking-in-lectures-1.
  4. YouTube Video – Dr. Ellie Mackin Roberts, “Research #BulletJournaling”, December 7, 2016, http://www.elliemackin.net/blog/category/bullet-journal.

 

How to cite this post:

 

Bernadette K. Cogswell, “Three Keys to Creating a Discovery Skills Practice Program”, The Insightful Scientist Blog, March 8, 2019, https://insightfulscientist.com/blog/2019/three-keys.

 

[Page Feature Photo:  Keys in an equipment room in China.  Photo by Chunlea Ju on Unsplash.]