Author: Bernadette K. Cogswell

Ready to train your discovery skills in 2026?

I know, it’s been a hot minute since I posted on InSci!

I’ve dedicated a lot of time to teaching. But wow has working with my students been a game changer for me.  I’m finally ready to start crafting what I know into tools and trainings for others to pursue discovery.  I’ve been working hard behind the scenes to bring you a first online course on how to elevate your discovery skills some time in 2026.

If you’ve read the blog over the years you know a little about my journey.  I dreamed of becoming a physicist as a child, but dropped out when I started college.  I eventually went back to school in physics and got my Ph.D. I got to work with amazing people at some of the best academic institutions in the world. And I worked on incredible science: studying how to reduce the threat of nuclear weapons and digging into the behavior of one of the most elusive building blocks of the universe.

But doubt and fear held me back from pursuing science at the level I wanted.  I didn’t want to make incremental gains.  I wanted to make discoveries.

It took me years before I found my confidence and voice in physics.  But it’s never too late to become the person you always wanted to be, performing at the level you always knew you could.

In 2026, we are going to level up together!

Are you looking forward to a course that will let you sharpen your discovery skills?  What journey brought you here and what excites and scares you the most about pursuing discovery? Let me know in the comments!

 

Photo by Joyce Hankins on Unsplash

Point of Origin

 

On the influence of tracking the evolution of your ideas on the pace of discovery.

 

Have you ever moved, or had a big change in your situation, and when you started sorting through everything you wondered why you kept it all?

I have been looking through all the handwritten paper notes I scanned just before I left England (…more than 1,476 sheets of notes!…).

They are red, purple, and green block notes.  Mysterious half-sentences jotted down in equally bright felt tip pen.

They are small Moleskine pages stained with decaf coconut flat whites.  Coffees bought at the chain Pret a Manger for my morning tram ride to work in England.

And they are neatly laid out calculations on blank pages.  Carefully crafted while sitting at my temporary desk.  Each time anxiously listening for the buzz of wasps through the open window in high summer in a building with no air conditioning.

Why did I keep them all?

Because I believe in the value of tracking the evolution of your ideas.

I think it can emphasize when you are harping on the same old theme.  It can point out when you have failed to try something different.  And it can highlight when you have made progress over the course of time.

All of this evidence can speed up the pace at which you gain new insights, and hence the pace of discovery.

Tracking ideas can also remind you of the reality of how you actually arrived at some inflection point in your progress.

And it can pinpoint when you suddenly veered into promising territory.  (In the lean innovation and startup world, this same concept is called a pivot point in product development.)

 

In principle, tracking the evolution of your ideas speeds up discovery because we have bad memories

 

We have very selective memories.

I won’t quote a bunch of psychology literature here since most of us will recognize from experience the existence of the following ideas.

How many times have you argued with a family member or colleague about something they say they don’t remember happening?

Psychologically we do have selective memories, a result of “selective attention”.  We retain only some things as important enough to remember and ignore the rest.

Have you ever debated with a family member or colleague and they loop back to the same argument?

Sometimes, no matter how many times you state your case from a different angle, they keep coming back to the same point.  A point you think you’ve already rationally and calmly explained to them is no good.

Our brains really do have a thinking pattern, called the “Einstellung effect”, in which they get stuck on a particular loop that is more accessible in our memory.  The brain can’t get past that idea to try other solutions or take other lines of thought.

Another trick our mind plays on us is to engage in something called “sunk cost bias”.

This is the belief that items we have invested our personal time and money in are more valuable than they actually are.

So once you’ve latched on to a particular train of thought (or your colleague with the “crazy theory” has) the more time you spend on it, the more convinced you’ll be that it’s valuable.

(Unfortunately, we also have a mental predisposition to believe that more complex theories are more likely to be true than simpler ones).

The point is, our minds are not perfect repositories and mirrors.

Our memories don’t capture in exact detail everything that happens to us.

And our minds can’t reflect back to us precisely what we need when we try to recall a set of events or information.

But science is full of discoveries that were driven by personal events and private internal themes.

These themes kept driving the discoverer to make certain idiosyncratic and, it turns out, progressive choices at different points along their path.  (To see an example of this at work in someone other than our beloved Albert Einstein, see the link on the discovery of high-temperature superconductivity in American Scientist below).

In some cases, these discoverers were aware of these themes in their choices, but at other times they were not.

So imagine how powerful it would be if you could see these themes, as they play out.

Powerful why?

Because being able to see the evolution of your ideas and themes would give you the ability to change themes at will. It would also allow you to recognize nontraditional inputs, linked to the theme, that might also push you toward discovery.

Hoping to recognize your evolution and thematic drivers by chance is bound to be slower, a sort of random walk.  In contrast, doing so with intent is an efficiency-driven algorithm.

 

Being holistic, tracking the evolution of ideas mobilizes and harmonizes environmental forces to speed up discovery

 

Not only would knowing your own intellectual history and ancestry help you make discoveries faster, but a realistic picture of how discoveries are made would enable powerful social forces to come into play.

At the level of policy, a clear awareness of what it takes to make a discovery would allow for more supportive policy-making.  This means knowing how long, by what actual means, with exposure to what themes and ideas, and according to what personal choices a discovery was made.

At the group or organizational level, having an honest and holistic understanding of the scientific discovery process allows a group to better synchronize with discovery goals.  It may highlight when bringing in a new person, a new department, or a new topical theme is useful.  Or it can elucidate when new resources or more time are best given to the team already present to incubate discovery.

 

In practice, tracking the evolution of your ideas can be achieved through two activities

 

On a practical level, tracking the evolution of your thoughts requires two different mindsets to be at play (though not at the same time) as you move through your investigation process.

Let’s call them the “logging mind” and the “reflecting mind”.

(In the study of learning, related concepts are the “focused mode” and the “diffuse mode” of thinking, respectively).

These two mindsets naturally lead to two sets of activities to engage in during the investigation process, when you’re trying to track your intellectual heritage.

The first activity uses the logging mind and is where you record your exposure to various ideas, themes, individuals, sources, and activities.

I have alternately logged these things on sticky notes, in notetaking apps on my phone, in spiral notebooks, and on block notes, over the years.

In the last two years I have started to record, along with a one-sentence reference to each item, two additional tags.

Take for example the cryptic block note, “Network Analysis”.

The first tag might be a place, such as “Chicago conference on CEvNS”.  (Or tags might be simpler, like “Nashville, TN” or “Schiphol Airport”).

The second tag might be a date such as “F.11.22.2018”.  (The “F” stands for Friday.  I use M, T, W, R, F, S, and U for the days of the week).

I find the combination of these two tags and a note allow me to bring up in my memory, by association, what I was doing, how I came in contact with the item, and why it struck me as important.

(Sometimes I can rely on just the date tag, if it’s memorable enough.  For example, around the date I moved U.S. states or countries, birthdays, holidays, and very sad family events stick with me.)
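
The tagging scheme above is simple enough to automate.  As a minimal sketch (the function names, the pipe-separated layout, and the example date are my own for illustration, not something from this post), the day-letter date tag could be generated in Python like this:

```python
from datetime import date

# Day-of-week letters as described above: M, T, W, R, F, S, U.
# Python's date.weekday() numbers Monday as 0 through Sunday as 6,
# so the string below is indexed directly by weekday().
DAY_LETTERS = "MTWRFSU"

def date_tag(d: date) -> str:
    """Build a date tag like 'F.11.29.2019' (day letter + month.day.year)."""
    return f"{DAY_LETTERS[d.weekday()]}.{d.strftime('%m.%d.%Y')}"

def log_entry(note: str, place: str, d: date) -> str:
    """Combine a one-sentence note with a place tag and a date tag."""
    return f"{note} | {place} | {date_tag(d)}"

# Example using a date I can verify (a Friday):
print(log_entry("Network Analysis", "Chicago conference on CEvNS",
                date(2019, 11, 29)))
# → Network Analysis | Chicago conference on CEvNS | F.11.29.2019
```

The point of keeping the tag machine-generated is consistency: every logged item gets the same three-part shape, which makes later searching and sorting by place or date trivial.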

This associative thinking mode is actually much more reliable than a chronological one.

Research has shown that our minds are especially good at recalling visual-spatial information—such as places.  (This is famously used in the “memory palace” or “method of loci” technique by world champion memory athletes).

So for the conference tag example above, upon seeing the item, I might even be able to remember:

  • where I was sitting (the lobby of the University of Chicago Physics Department building eating a Starbucks snack),
  • what I was wearing (a much loved fuchsia and burgundy flannel shirt with a favorite pair of Italian Murano glass earrings),
  • the internal conversation I was having (about using network analysis of publications on a scientific topic to inform community white papers and roadmap documents), and
  • what had just happened that made me jot down the note (interviewed researcher Andrey Rzhetsky about an article he co-authored using network analysis to track the efficiency of group discovery in science).

 

The second activity uses the reflecting mind and is where you record your reactions and responses to the investigation process and the items recorded in the logging mind activity.

For example, keeping a research journal and “freewriting” about what you are thinking at regular intervals can work.  Just be sure to include personal details, such as what is going on in your life and environment.  And note your personal reactions towards events and evidence (a “reflecting mind” activity).

You’ve also seen how piecing together a train of thought, which is what you do with the “reflecting mind”, can lead you to an awareness of what is affecting your work and what themes are driving your process.

For example, I shared with you the Netflix-driven incidents that honed my working definition of scientific discovery in another post (“Don’t Curate the Data”, see link below).

That train of thought came to me after reading a bunch of philosophy literature.

Feeling dissatisfied with what I had read, I found myself unable to purge the language and ideas others had used and move in a different direction.

To get past this kind of einstellung, I made a lateral move.  Instead of reading more I watched TV.

I browsed according to what themes called to me—craftsmanship, a sense of honor, nobility, care, handcraft, and diligence—and which I felt defined the spirit of scientific discovery.

These new spark points were not enough for an operational definition testable in the lab, but they were enough to guide me toward different themes.

I was very diligent about capturing my thoughts on block notes at the time.  So, I was able to recognize the old themes that were causing me dissatisfaction—categorization, thought, chronology—and consciously turn toward new themes that I wanted to include—quantitative, applied, craftsmanship.

Then I actively based my new efforts on that mental shift.

After six months of trying to come up with something new, within two weeks of that shift I had generated my own definition of scientific discovery, one I have not come across elsewhere in the literature.  (And I am working on putting together historical case studies that illustrate the merits and shortcomings of this definition, for publication in a peer-reviewed journal).

But without being able to look at my point of origin, even if only at one turn in my path, I would not have been able to consciously make this mental shift.

This kind of clear-sighted awareness and finesse is what more discoverers need to help them make smart choices and shift their thinking when the situation calls for it.

 

By analogy, tracking the evolution of your ideas is making visible an invisible maze

 

I have seen many versions of how to track the evolution of your ideas.

I’m still working on finding my own best way, which supports my intention of becoming a Maestra of scientific discovery and the scientific discovery process.

Sometimes trying to find our way toward a discovery feels like an invisible maze where we encounter many dead ends, or end up right back where we started.

By keeping a record of our thoughts and influences we make the maze visible.

And we give ourselves an aerial view of our point of origin and the paths we have traced out in our minds and with our actions.

Knowing your point of origin and where your thoughts have wandered can help speed you toward undiscovered territory, by showing you the paths less travelled.

 

Interesting Stuff Related to This Post

 

  1. Gerald Holton, Hasok Chang, and Edward Jurkowitz, “How a Scientific Discovery Is Made: A Case History”, American Scientist, volume 84, July to August, pages 364-375 (1996), freely available on Researchgate from one of the co-authors at https://www.researchgate.net/publication/252275778_How_a_Scientific_Discovery_Is_Made_A_Case_History.
  2. Daphne Gray-Grant, “Why you should consider keeping a research diary”, Publication Coach, October 23 (2018), https://www.publicationcoach.com/research-diary/.
  3. Memory palace technique at the Memory Techniques Wiki, “How to Build a Memory Palace”, https://artofmemory.com/wiki/How_to_Build_a_Memory_Palace.

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Point of Origin: On the influence of tracking your ideas on the pace of discovery”, The Insightful Scientist Blog, November 29, 2019, https://insightfulscientist.com/blog/2019/point-of-origin.

 

[Page feature photo: An aerial view of the maze at Glendurgan gardens, built in 1833, in Cornwall, United Kingdom.  Photo by Benjamin Elliott on Unsplash.]

The Seduction of “Eureka!”

Many of us believe we struggle because we can’t come up with ideas.

 

My new opinion is that generating “breakthroughs” is not the reason we struggle with scientific discovery.  Knowing what to do after you’ve had a breakthrough is where the challenge lies.

I have come across people who self-identify with one of two camps when it comes to “coming up with ideas”:

One camp believes it is “creative” and is good at coming up with ideas.

This creativity may be perceived as labor intensive (“I need a lot of time to think”), as idiosyncratic (“I only do my best work when I work after midnight while listening to songs from the musical “South Pacific” and writing while standing at the kitchen counter”), or as mystical (“Things just come to me when I dream, or they pop into my head in the shower”).

The other camp believes it is “not creative” and will not be able to come up with ideas.

This lack of creativity may be perceived as a biological trait (“I just wasn’t born with the creativity gene”), as practical (“I just stick to the facts and don’t let my imagination get carried away”), or as un-learnable (“I’ve just never gotten the hang of thinking up stuff”).

 

We all want to engineer Eureka! moments into our workflow.

 

“Coming up with ideas” is just another phrase for a “breakthrough”.  Or in the case of science, we call these ideas or breakthroughs “scientific hypotheses” (and when they are proved right they become “scientific discoveries”).

Most people I’ve met believe that what holds them back is the inability to engineer a breakthrough moment.  They think that scientific discovery eludes them because of their inability to come up with a good idea.  So they believe they struggle with generating magical Eureka! or Aha! moments, where things come together and new understanding suddenly appears.

In the pilot Insight Exchange event, I brought together academics from different science fields and at different career stages to talk in small groups about what was holding them back from scientific discoveries in their own work.  The most consistent piece of feedback I got afterward was that people wanted me to give them more strategies to engineer breakthroughs.

 

But we already have breakthroughs daily because we’re hard-wired to see meaning and patterns.

 

I recently learned about the work of neuroscientist Robert Burton on the cognitive and emotional basis for feelings of “certainty” (the belief that our understanding of something is accurate).  According to Burton, we are cognitively hard-wired to come up with ideas, i.e., breakthroughs.

More importantly, we are built to experience feel-good sensations when we believe we have achieved a breakthrough, i.e., when a spontaneous and unconscious understanding rises to consciousness.

That feel-good sensation arrives in the form of dopamine, a chemical released in the brain that triggers the brain’s reward and pleasure centers.

There are a couple of important aspects to this finding.

First, being rewarded for achieving a feeling of certainty about our knowledge encourages us to do it again.  Like any pleasurable event, we seek to repeat or renew those pleasant feelings.

So Eureka! once, and you’ll want to Eureka! again and again.

As an aspiring discoverer, this probably all sounds pretty good.  It might appear like we are biologically designed to experience pleasure when we discover things, which would encourage us to discover more things.  It seems like a progress-promoting positive feedback loop, right?

Maybe.  But the seduction of Eureka! is a double-edged sword.

Why?  Because we experience the pleasant sensations and dopamine hit when we believe that we have understood something, even if our understanding is wrong, such as when it’s based on incomplete information.

Basically, we search for meaning and patterns and our brain rewards us when we find meaning and patterns, no matter what (you can read more on this in one of Burton’s articles published in Nautilus, which I’ve linked to below).

 

Unfortunately, our brain’s reward system doesn’t depend on whether we’ve got the right pattern or meaning.

 

Our internal reward centers are indiscriminate.  Come up with a wrong explanation that your brain at least perceives as a reasonable possible pattern and you can still feel the exact experience of an Aha! or Eureka! moment.  Even if you’re dead wrong.

A second important aspect is that we have evolved to recognize patterns and assign meaning to the information we receive.

Burton uses the classic example of our ancestors recognizing lions (a pattern) and knowing what seeing a lion means to a very tasty looking pre-historic ancestor (the meaning).  We need to be able to put together growling, fur, four legs, claws, teeth, maybe a jungle or savannah plains, that the sun is high in the sky means feeding time, that lions eat smaller animals like us, etc. in order to be able to say “Aha!  I’d better run before I get eaten!”

We need to be able to combine many types of sensory information (visual, auditory, smell, tactile perceptions of temperature and time of day) and experiences (seeing lions eat other animals or even other people) together in order to be able to recognize one pattern (a hungry lion) and its meaning (I’m in danger).

What I am trying to drive home is that the two pieces that combine to make a breakthrough, pattern recognition and meaning-making, are processes each and every one of us engages in every second of every day.

We are creating hypotheses about how people interact with us, what world events mean for our lives and livelihoods, how the weather will affect our health and plans for the day, and what the ending to the TV show we are watching or book we are reading will be.

Many of the ideas that we have about these things will be right, but many of our ideas will be wrong.

It is the same process as scientific discovery—we acquire data, we search for patterns, we perceive patterns, and we make meaning from those patterns.

I don’t need to give you strategies to experience breakthroughs.  You’re doing it all the time.

But as Burton’s work highlights, the problem is that many of our breakthrough ideas are just wrong, even when we feel sure they must be right.

 

The real trick is to sift through all the wrong-headed Eureka’s to find the one Eureka! that’s actually accurate.

 

If I could go back and give my Insight Exchange participants a new take-home message, I would point out to them how many breakthrough ideas they had already had.  They had probably already thought up and dismissed ideas about new methodologies, new sources of funding, and reasons why certain pieces of data might fit together.  But they had also already discarded many of those ideas as too silly, too hard, too unlikely, too flaky, or too unfounded.

That they had discarded ideas was not the problem.

The problem was, when they dismissed those earlier ideas, they had also subconsciously and simultaneously dismissed their skill in thinking up new things.

It was this failure of self-awareness that was harmful to their forward progress.

Many of them had put themselves in the “I’m not creative” camp and so they had fixated on finding new ways to become capable of coming up with ideas.

They were focused on fixing an imaginary problem.

You have had many ideas, you are having ideas right now, and you will continue to have ideas.  That’s the take home idea I wish I’d given my pilot Insight Exchange group.

 

So, the discovery part comes in what you do with any ideas you have.

 

In Burton’s Nautilus piece, he hints at the fact that we are more likely to latch on to false meaning and patterns (which, remember, our brain finds just as rewarding as accurate meaning and patterns) when we have limited or inconclusive data.

Hence, the activities and skills we need are not just how to evaluate ideas, but also how to evaluate and gather data when what we have is inconclusive or limited.

And the mindset we need is just to be aware that no matter how much information we have, we are always, on some level, operating in a world of limited and inconclusive data.

The above two sentences might sound familiar: together they describe the scientific method.

It is well-designed to help us react wisely to our internal hunger for Eureka! so that we can find the accurate, and not just the available, explanation.

Formulating a cohesive understanding is still very much a work in progress for me and I do much of that thinking “out loud” here in the pages of The Scientist’s Log.

As Burton cautions, searching for certainty in our understanding can be a dangerous game of giving ourselves what we want, instead of giving ourselves the truth.

But Burton also proposes that the best remedy is to give up certainty in favor of “open-mindedness, mental flexibility and willingness to contemplate alternative ideas” (Scientific American, 2008).

Thus, it turns out that fighting for the alluring Eureka!, those lightbulb moments from cartoons, isn’t the struggle we discoverers have to overcome.  It’s the siren song of Eureka! and its pleasurable aftermath that we need to learn not to pursue at all costs.

The word “Eureka” derives from the Greek for “I found it.”

The ideas we find lurking in our minds are sometimes new sources of illumination rising from the depths of the sea of knowledge.  But other times they are just flotsam and jetsam washed up on the beach of bad ideas.

The discoverer’s way is to learn to tell the good lightbulbs from the duds and to treat the pull of Eureka! like a pleasant pastime and not an alluring addiction.

 

Interesting Stuff Related to This Post

 

  1. Robert Burton, “Where Science and Story Meet”, Nautilus (April 22, 2013), http://nautil.us/issue/0/the-story-of-nautilus/where-science-and-story-meet.
  2. Robert Burton as interviewed by Jonah Lehrer, “The Certainty Bias: A Potentially Dangerous Mental Flaw”, Scientific American (October 9, 2008), https://www.scientificamerican.com/article/the-certainty-bias/.
  3. David Biello, “Fact or Fiction: Archimedes Coined the Term “Eureka!” in the Bath”, Scientific American (December 8, 2006), https://www.scientificamerican.com/article/fact-or-fiction-archimede/.

 

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “The Seduction of ‘Eureka!’”, The Insightful Scientist Blog, November 15, 2019, https://insightfulscientist.com/blog/2019/the-seduction-of-eureka.

 

[Page feature photo: Unusual junk, a lightbulb, washed up on a beach in South Africa.  Photo by Glen Carrie on Unsplash.]

 

Spring and Well

On the website, I focus on how to foster your individual ability to make scientific discoveries.  It’s your individual contribution that’s emphasized, even if you work as part of a team, group, or formal collaboration.  If you’ve read many of my posts, you will know that I have so far divided aspects of an individual’s discovery ability into four major themes (which I use as tags to categorize The Scientist’s Log blog posts): activities, knowledge, mindset, and skills.

Let me take an opportunity in this post to clarify how I define these themes, how I think they support scientific discovery, and, most importantly, tell you which one I think every discoverer should focus on and why.

 

Knowledge is recognizing what you don’t know

 

This may sound counterintuitive, but, when you’re pursuing scientific discovery, obtaining a good stockpile of knowledge is really about recognizing all the things you don’t know.

Let’s do a little experiment:

Below I’ve listed three questions.  Read them over and then decide which question you think is most likely to lead to a breakthrough scientific discovery in the next 5 years:

 

  1. Why and how do mice sing?
  2. How do neutrino particles acquire mass?
  3. Where did Amelia Earhart’s plane crash on her final flight?

 

Do you have a guess?  Okay.  Now stop and think about how you even began to tackle picking a question.  Did you have to stop and try to define words for yourself, like what does she mean by “sing”, or what is a “neutrino”?  Did you try a quick web search, reading a few headlines from the results to see if any of the questions was a decoy, i.e., it had already been answered?  (Did it even occur to you that I might include a trick question?)  And in trying to pick a question, were you struck by how little or nothing you knew about some or all of the topics behind the questions (biology and zoology for question 1; particle physics and mathematics for question 2; history, oceanography, and aviation for question 3)?

All right.  Now, suppose I give you a different set of three questions and ask you to again decide which question you think is most likely to lead to a breakthrough scientific discovery in the next 5 years:

 

  1. What’s the most efficient way to butter toast?
  2. How can we teach self-driving cars to avoid hitting pedestrians?
  3. Why are bee colonies vanishing at an accelerated rate?

 

Did you have a totally different reaction to this set of questions?  I’m willing to bet money you feel more comfortable with your response to this second set than you did with the first set.  Why?  Because most of us are much more knowledgeable about the second set of topics than the first set.  We have some of the necessary knowledge to help us make an assessment.  Whereas, in the first set of questions, we don’t know enough facts to begin to guess.

It’s not knowing the facts that’s important.  It’s knowing enough to know the limits of what the facts are and what they can tell you that counts.  Discovery is all about finding out something new.  That means discovery starts where the facts fizzle out.

So that’s why I emphasize knowledge as a key theme in productive scientific discovery efforts.  Knowledge is your perception and awareness of observations and facts about the world around you.  You have to know enough to recognize what you still don’t know; and you have to know enough to realize that the gaps in what you know matter to more people than just you.

 

Mindset is caring enough to find out what you don’t know

 

Of course recognizing that something important is unknown isn’t enough by itself.  We’ve all had conversations in our down time when we come up with brilliant questions, ideas, or inventions while talking or joking around with friends or family over a coffee or a beer.  But when the conversation ends, so does our interest in following up on that spark of insight.

And these sparks are often inspired by suddenly commiserating about how existing facts or inventions have failed to make our lives or day better or easier at some key moment.

But if, when we reach that moment of recognition, we just stop at commiserating (when we’re with others) or musing (when we’re alone), then discovery would never happen.  That spark has to light an intense caring inside you; a desire to fill that gap or invent that bridge between where the world is and where you would like the world to be.

That’s why mindset is another core theme for scientific discovery.  Mindset is the intention you hold inside about what to do with your knowledge.  Your intention has to be to pursue discovery.  Discovery won’t pursue you.  Without the right mindset, even if you happen to find yourself in a discovery moment, you might pass it by without realizing it or, worse, think it’s too much hassle to follow up on.

 

Skills are procedures you use to channel caring into doing

 

“Discovery awaits the mind that pursues it,” as the saying goes here at The Insightful Scientist.  “Pursue” is a big, wide-open word.  It’s a word that is made into something concrete through skills.

Skills are what you do to put your mindset into practice.  For example, if you value adapting ideas from one field to another then you read widely in different fields; or if you believe that trying it out as soon as possible to get real time feedback is key, then you will become adept at building prototypes or toy models.

I always think of skills as a carefully choreographed sequence of things you do with your body and mind in order to achieve some outcome.  The example that always comes to mind for me is actually a skill I never perfected: fishing.

My ever patient grandfather, who loved to fish and did so constantly after he retired, tried very hard to pass the skill on to me starting when I was young.  He bought me my first fishing pole as a gift when I was around 4 years old.  It was a tiny, little kid’s special pole, white and pink.  I thought it was awesome, although I wasn’t too sure about the scary sharp looking hook.

The first time I tried to cast the line by myself, after a suitable instruction session from my grandpa, I swung the pole back and then forward hearing the reel make a gravelly unwinding sound.  I started to try and tighten the line when I heard my grandfather say in a very calm but firm tone, “Looky here Bern, stop what you’re doing.  Don’t move.  Now turn around real slow.  And don’t jerk the line.”

I was always the kind of little kid who was a goody two shoes, so I followed instructions, and turned around slowly.

It turns out I had embedded my hook in my grandfather’s head when I had swung it back to cast it.

Now I would like to say that this story ends well and that I became more skilled as I grew up and spent vacations with my grandpa.  Not so much.  I did learn not to hook people on the back swing.  But instead I developed a knack for catching anything but edible fish on my line, losing the hooks, or having to cut the line (and once almost losing the fishing pole when a fish took the bait and nearly jerked it out of my hands).

I caught baby sharks, by accident.  I caught sting rays, by accident.  I caught pufferfish, by accident.  I was supposed to be catching catfish and flounder and other things my grandpa would cook up in a bountiful fish fry.  But I never got the skill right; I never moved my bait just so, with the right pacing of movements and flicks of the wrist.  My fishing was like someone break dancing badly in the middle of a slow waltz.  My choreography was just all wrong.

So that’s skills.  Skills are being able to coordinate your mental focus and physical movements to choreograph a sequence of actions that earn you your desired result.  Without scientific discovery skills you can’t infuse your intention with action.  Nothing will get done, nothing will get discovered.

 

Activities are tasks you complete to finish skilled procedures

 

Of course skills are complicated.  Like I said, they are a little like choreography and emphasize moving through a whole sequence.  They are a whole chain of actions and thoughts moving together toward a desired outcome.  But it’s impossible to learn or master, let alone perfect, such a complicated procedure without breaking it down into small, doable tasks.

Those tasks are what I have called activities.  Activities are the ten- to twenty-minute bursts of really focused intention and action that you take to accomplish one small thing.  The key is, activities focus on the one small thing, while skills try to pull off the whole big project.

In traditional science education we teach a set of skills related to scientific discovery, such as using statistics, handling scientific equipment, solving math problems, and scripting code.  In traditional science practice we learn a few more skills related to scientific discovery, such as critiquing methodology, writing presentations, pitching ideas for funding, and supervising others to carry out assigned activities.

We usually refer to these as hard skills (i.e., technical skills training) and soft skills (i.e., professional skills training).

But somewhere in there, if part of our goal is to make a scientific discovery, has to be some room for discovery-centric activities.  How do you type up a one-year discovery roadmap?  What sections and topics do you need to create and maintain in a discovery researcher’s notebook?  When you read about the top ten scientific discoveries in a given year, what information about how they were achieved do you need to jot down and follow up on in order to acquire new skills, new knowledge, and more mindset hacks?

These are all activities.  Activities are where we live the day-to-day of our jobs and lives.  So it’s no surprise that activities are also where our discovery paths lie waiting.

 

Skills are the bridge between a Dreamer and a Discoverer

 

So which of these themes—knowledge, mindset, skills, and activities—do I think is the most important for scientific discovery?  In other words, if you are short on time or the stakes are high then which theme should you hold on to while you let go of the others?

My personal choice would be skills.  Skills are the balance point between the big picture and the details.  By focusing on skills you keep both in sight.  And skills are the point at which discovery stops being a noun and starts being a verb.

I think you should go after skills first because they will carry you the farthest toward your goal.  There’s an interesting idea currently popular among the life hacking crowd that you should pursue a 1% change every day in order to see significant improvement over the course of time, rather than trying to improve by, say, 30% all at once.  The idea is that the 1% changes are easier to stick with, but just as valuable if you actually stick to doing them consistently.  In contrast, sometimes if we make the 30% change we fall off the wagon too quickly and the benefits don’t stick.

So if you’re looking for your 1% on scientific discovery, I would say go after skills.  Try to define discovery-centric skills.  Try to model discovery-centric skills.  Try to practice discovery-centric skills.  Even if it’s just for 1% of your day (and if you work for a typical 8 hours a day, that’s only a whopping 4.8 minutes of your time).
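To make the arithmetic concrete, here is a quick back-of-the-envelope check in Python (the workday figure and the compounding comparison are my own illustration, not taken from the marginal-gains article):

```python
# Back-of-the-envelope check of the "1% a day" idea.
# All numbers are illustrative.

WORKDAY_MINUTES = 8 * 60  # a typical 8-hour workday

# 1% of an 8-hour workday, in minutes
daily_practice = WORKDAY_MINUTES / 100
print(f"1% of an 8-hour day: {daily_practice} minutes")  # 4.8 minutes

# Why small, consistent changes add up: 1% daily improvement,
# compounded every day for a year, versus a single 30% jump.
compounded = 1.01 ** 365
print(f"1% daily for a year: about {compounded:.0f}x better")  # about 38x
print("One-time 30% change: 1.3x better")
```

The point of the compounding line is the one James Clear makes: the tiny daily change looks negligible on any given day, but it dwarfs the one-off heroic effort if you actually sustain it.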

Of course, you don’t want to neglect the other themes in the long run, but I think your best bet of seeing meaningful improvement will come if you invest your time in your discovery skills.

I’m developing a vision for The Scientist’s Repertoire (previously called How-to Articles, and before that The Physicist’s Repertoire) to help you and me focus on this crucial skills component.  Because skills are like both spring water and a water well for scientific discovery: they are a spontaneous, natural source of fresh new energy and a man-made reserve of fluid resources.  Emphasizing skills can give us a wellspring of ability to power our pursuit of discovery.

 

Interesting Stuff Related to This Post

 

  1. James Clear, “Marginal Gains: This Coach Improved Every Tiny Thing by 1% and Here’s What Happened,” online article at jamesclear.com, excerpted from his book Atomic Habits, https://jamesclear.com/marginal-gains.
  2. Ranker’s crowd-sourced list “The Greatest Scientific Breakthroughs of 2018,” https://www.ranker.com/list/scientific-breakthroughs-of-2018/ranker-science.

 

Related Content on The Insightful Scientist

 

Blog Posts:

 

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Spring and Well”, The Insightful Scientist Blog, August 27, 2019, https://insightfulscientist.com/blog/2019/spring-and-well.

 

[Page feature photo: Radium Springs in Albany, Georgia in the United States.  Photo by Timothy L Brock on Unsplash.]

An Intangible Scheme


Why are categories so useful?  When we think about things, especially when we try to understand why things are the way they are, we often try to put things into categories.  We like to decide that certain elements fit categories A and B; we match certain processes to categories H and W; and then we conclude that the outcome turned out to be a result in category Z.

This reliance on categories, on categorizing things or inventing new categories with which to label things, is something we do all the time.  I don’t have an answer as to why categories might actually be useful.  I don’t even have an answer as to why we believe categories are so useful.  But I do have some thoughts about why categories matter for scientific discovery.

 

Open Access Isn’t Universal Access

 

It all started when I was looking around for open access articles to read about scientific discovery.  I recently lost the paid subscription access I had to most journal articles.  So I had to switch over entirely to only open access articles (i.e., those that don’t live behind a paywall).  This led me to spend a week wandering around the internet looking for good quality free resources.

Finding free journal articles didn’t turn out to be the problem.  Finding relevant free journal articles did.

There is still no consistency to what peer-reviewed articles and pre-prints are freely available.  Sometimes an entire journal is open access.  Sometimes only articles the author paid to make open access are freely available.  And sometimes only articles the journal considers prestigious or very high impact are made freely available (as a form of advertising).  How much free access is the “right amount” of free access is an issue that both publishers and scientists continue to wrestle with.

So on one particular day, instead of looking for articles I needed and then checking to see if they were freely available, I spent a little time searching for free articles and then looking to see if they might be relevant.  I just needed to get a sense of what was out there.

It turned out to be time well spent because I came across some fascinating research in an area completely unknown to me: how to help kids with learning disabilities solve math word problems.

 

Learning Disabilities and Schemas

 

I’ve come across articles about how to improve student problem solving performance before.  I have worked in academia and, especially in physics, how to help students do better in class is a popular topic.

What was interesting about this research though is that I found it because I was actually looking up a definition of the word “schema.”

A schema, in psychology, is a way of mentally organizing and sorting information to help you make sense of the world based on previous experience.  We can have internal schemas about all sorts of things, like how to determine if you aced an interview, what’s appropriate behavior at a wedding, or why certain activities help you relax on vacation.

In early childhood mathematics education schemas have a more specialized meaning.  In that context, schemas refer to specific kinds of templates or recipes taught to children to allow them to solve word problems.

One of the earliest and best-known general math problem solving schemas was given by mathematician and educator George Pólya in his book How to Solve It (1945).  His math problem solving schema involves four steps: (1) clarify the problem, (2) create a plan to solve it, (3) execute the plan, and (4) check your solution.

In a 2011 article reviewing the literature on using schemas with children at risk of or with learning disabilities (both math and reading disabilities) author Sarah Powell talks about how specific (explicit and teacher-led) and lengthy (weeks to months) instruction on how to apply a schema to solve word problems can improve student performance.

I know you’re dying for me to get to the point and link this back to scientific discovery.  Here’s how that might work…

 

The Discovery is in the Transfer

 

Powell draws out of the research literature two key themes that were a lightbulb moment for my perspective on how to train yourself (or others) in scientific discovery skills.

The first key theme is that the schema training worked best when students were first asked to categorize the type of problem that needed to be solved.  For example, students were given word problems where they needed to add things together (“totaling” type problems), subtract things (“comparison” type problems), or multiply things (such as “shopping list” type problems).  (We’re talking about 3rd and 4th graders in the U.S. education system, so just 8- to 10-year-olds here.)

When students were just taught how to solve each type of problem using a schema, but not to identify problem types, they did well.  But when students were taught how to identify what type of problem they were dealing with and the corresponding schema, they did even better.  This was called the “schema-based instruction” approach.
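As a loose analogy (not Powell’s actual materials), the two-step logic of schema-based instruction — first identify the problem type, then apply that type’s solution template — can be sketched in a few lines of Python, with invented problem types and solvers:

```python
# Toy sketch of schema-based instruction: step 1, categorize the word
# problem; step 2, apply the schema (solution template) for that category.
# The category names and solvers here are illustrative inventions.

def totaling_schema(parts):
    """'Totaling' problems: combine separate quantities into a whole."""
    return sum(parts)

def comparison_schema(larger, smaller):
    """'Comparison' problems: find the difference between two quantities."""
    return larger - smaller

SCHEMAS = {
    "totaling": totaling_schema,
    "comparison": comparison_schema,
}

def solve(problem_type, *quantities):
    """Look up the schema for an identified problem type and apply it."""
    schema = SCHEMAS[problem_type]
    return schema(*quantities)

# "Ana has 3 apples, Ben has 4, Cal has 5 - how many in all?" -> totaling
print(solve("totaling", [3, 4, 5]))   # 12
# "Dee has 9 stickers, Eli has 4 - how many more?" -> comparison
print(solve("comparison", 9, 4))      # 5
```

Schema-broadening, in this picture, would be learning to map an unfamiliar word problem onto one of the existing keys in `SCHEMAS` even when its surface details look new.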

The second theme Powell found in the literature is that this performance could be boosted even further if students were given explicit instruction on how to apply the schemas they already knew to novel problems.  By explicit I mean that students were given specific guidance on how novel problems might differ from familiar problems, were taught how to link novel problems to familiar ones, and were then shown how to apply the already known schemas for the familiar problems to these new problems.  This was called “schema-broadening instruction,” as in the students broadened their ability to apply what they were already taught.

I think this is fascinating.  Do you see the echoes of working on a problem at discovery’s edge here?

Consider this:

As someone pursuing discovery you have almost undoubtedly been taught ways to solve known kinds of problems in your area of interest (or you may have taught yourself well-known methods by doing a lot of studying using the internet).  So, essentially, you are like students in the first theme — you have a set of schemas to solve certain kinds of problems.  These are problems with well-known answers that you already know can be solved.  And these are methods you already know work.

At discovery’s edge you now come to a problem that you don’t know how to solve (or you are trying to identify a previously unrecognized problem and point out that it needs solving).  You still have tried and true methods, but now you have no idea how to get those to work on your new problem.  Aspects of the new problem may or may not resemble your old problem.  And for a scientific discovery scale problem, you (or someone else) will have already tried all the known methods and shown they don’t work.

So you’re stuck in the second theme, the schema-broadening problem.  How do you get the methods you know to apply to a problem that’s new?

 

Schema is Just a Fancy Word for Category

 

I loved Powell’s article because I realized that a problem students with learning disabilities may have is the same one discoverers might face.  I mean that these two situations are similar in spirit (and I in no way mean to trivialize the nuances or differences between the two situations).

Once I realized this, it put into perspective why I myself had felt the need, when I first started working on how to improve the techniques of scientific discovery at the individual skill level, to jump in and start generating categories.  I had phases of scientific discovery (a way to put a process into categories); I was trying to compile strategies (categories for solving discovery obstacles); and I even spent a lot of time trying to find out what scholars were saying about the types of scientific discoveries (categories of discovery).

But then I second-guessed this approach, because I wasn’t quite sure why I thought it was so valuable.  Was it just habit?

In science, especially in fields like geology, paleontology, and particle physics, we are prone to “reductionism,” the tendency to want to break everything down into the smallest parts and to assume the behavior of the whole can be precisely determined from knowledge of the parts.  But this is not true of many natural phenomena (known as “emergent” phenomena), where the behavior that results from the interactions of the smallest parts is highly sensitive to many factors and cannot be reduced in this simple toy-model kind of way.  Nonetheless, reductionism tends to be a mental trap and blind spot to which many scientists fall prey (myself included).

But this idea of schemas, and our ability to call them up based on our mental association between a problem type and a particular schema, sort of summed up the implicit philosophy I was following:  If I could come up with types of problems related to achieving scientific discovery, and even types of scientific discoveries, then maybe I could identify a set of schemas to overcome those problems, and those schemas might be teachable.

In fact schemas are themselves just more categories, ways to put mental processes and beliefs into categories that we can use and implement at will.

Schema-broadening then is the crux of the problem as to why we don’t yet know how to “teach the skill of scientific discovery.”  We haven’t spent enough time thinking explicitly about why the schemas we have don’t apply to novel problems, or why we fail to recognize that a known schema can in fact solve a novel problem.  If we put more emphasis there, on studying how we transfer schemas from one problem to another, then maybe we can boost our ability to discover the undiscovered.

 

Building the Big Picture

 

The image that came to mind was of a vast and complicated mosaic.  This mosaic not only creates one large picture, but also contains within it many smaller pictures, set pieces within the larger world of the whole mosaic.  The information we gain through observation and experimentation is like the tiny tiles which need to be placed within the mosaic.  Our theories and hard-earned insights are like the set pieces.  Nature herself is like the whole mosaic.  But schemas are like the unseen outlines that tell us where the tiles should be placed in order for the mosaic to be a reflection of the real world, instead of a fantastical mirage.

It’s that intangible scheme, lurking behind the finished whole, that deserves our attention as much as the finished mosaic itself.

 

Interesting Stuff Related to This Post

 

  1. Sarah R. Powell, “Solving Word Problems using Schemas: A Review of the Literature,” Learning Disabilities Research & Practice 26(2), pps. 94-108 (2011).  Open access version available here.
  2. George Polya, How to Solve It (1945).
  3. Liane Gabora, “Toward a Quantum Model of Humor”, Psychology Today online blog, Mindbloggling, April 6, 2017, https://www.psychologytoday.com/us/blog/mindbloggling/201704/toward-quantum-model-humor.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “An Intangible Scheme”, The Insightful Scientist Blog, August 25, 2019, https://insightfulscientist.com/blog/2019/an-intangible-scheme.

 

[Page feature photo: Mosaic commemorating the death of Beatles band member John Lennon in New York City’s Strawberry Fields, Central Park.  Photo by Jeremy Beck on Unsplash.]

The Ugly Truth


I love a good mash-up, so let me ask you this: What do you get when you mash-up the ideas of two prolific female academics, one in social work and the other in theoretical physics?

The answer is: my musings for this week’s post, which boils down to the phrase “the ugly truth.”

 

A Tale of Two Academics

 

So which two academics am I talking about and which two ideas?  Here’s a quick rundown (I’ve included links to their webpages at the bottom of this post in case you want to follow-up):

__________________________

Brené Brown – Ph.D. in social work

 

Currently based at the University of Houston, Brown studies topics like the intersection between courage, vulnerability, and leadership.  She’s an academic researcher, a public speaker, and runs a non-profit that disseminates much of her work in the form of research-based tools and workshops.

__________________________

Sabine Hossenfelder – Ph.D. in physics

 

Currently based at the Frankfurt Institute for Advanced Studies, Hossenfelder studies topics like the foundations of physics and the intersection between philosophy, sociology, and science.  She’s an academic researcher, a public speaker, and writes pieces communicating science to the public as well as maintaining a blog well-known in physics circles.

__________________________

In the midst of simultaneously reading the most recent popular books published by these two researchers (Dare to Lead by Brown and Lost in Math by Hossenfelder), I was struck by a link between the two.  That link had to do with the premise of Hossenfelder’s book and one of the leadership skills Brown promotes in her book.

Both of these involve the word “beauty.”

 

Sabine Hossenfelder’s Lost in Math

 

Hossenfelder argues that physicists (in her case, taken especially to mean theoretical particle physicists and cosmologists) have been led astray by using the concept of “beauty” to guide theoretical decision-making, as well as to lobby for which experiments to carry out to test those theories.  By “using” I mean that she illustrates through one-on-one interview snippets how theorists rely on beauty to help them make choices about what to pursue and what to pass by.  She also illustrates, through a review of the literature, how theoretical physicists have tried to define beauty both with words (like “simplicity” and “symmetry”) and with numbers (through concepts like “naturalness,” the belief that dimensionless numbers should be close to the value 1).

According to Hossenfelder, this beauty principle does not drive the theoretical effort among just a small few, but among the working many.  And she thinks it’s a problem.  Her main reason for pointing the finger is her belief that this strategy has not produced any successful new theoretical results in the last few decades.

The best quote to sum up Hossenfelder’s book in my reading so far is this:

 

“The modern faith in beauty’s guidance [in physics] is, therefore, built on its use in the development of the standard model and general relativity; it is commonly rationalized as an experience value: they noticed it works, and it seems only prudent to continue using it.” (page 26)

 

Funny that Hossenfelder should mention values.  Values are something Brown talks about at length.

 

Brené Brown’s Dare to Lead

 

The crux of Brown’s book Dare to Lead is about acknowledging and leveraging qualities that make us human (vulnerability, empathy, values, courage) in a forthright, honest, and authentic way in order to become better leaders.  Brown illustrates her concepts with numerous organizational and individual leader case studies peppered throughout the book, as well as copious academic research from her team on this specific topic.

According to Brown, the prime cause of a lack of daring leadership is cautious leadership, best expressed through the metaphor of entering an arena fully clothed in heavy-duty armor.  The energy put into developing and carrying the armor takes away from the energy left to masterfully explore the arena.

Here, I’m most interested in her thoughts on values and the role they should play in daring leadership.

In case you’re wondering, Brown defines leadership as “anyone who takes responsibility for recognizing the potential in people and processes, and who has the courage to develop that potential.” (page 4)

(It’s the idea of developing potential, which resonates with scientific discovery, that caught my eye when I read the back cover of the book on a lay-over in Amsterdam.)

Brown traces much of our motives to our values: they drive our behavior and determine our comfort level when we take actions that either align with (causing us to feel purposeful or content) or run counter to (causing us to feel squeamish or guilty) our values.

The best quote to sum up Brown’s discussion of values is this one:

 

“More often than not, our values are what lead us to the arena door – we’re willing to do something uncomfortable and daring because of our beliefs.  And when we get in there and stumble or fall, we need our values to remind us why we went in, especially when we are facedown, covered in dust and sweat and blood.” (page 186)

 

One last detail from Brown’s book will prime you for my mash-up:  On page 188 of her book, Brown gives a lengthy list of 100 plus items (derived in her research) from which to identify your core values.

The ninth word down on the list of values?  Beauty.

 

Beauty is Just Another Motive

 

So here’s where the mash-up begins.  And let me throw in one more element, just to make it fun.  Let me put this all in a metaphor, like something from a cheesy crime procedural TV show.  Ready to put two-and-two together and solve a mystery?

So, according to Hossenfelder a crime against physics has been committed (the failure to come up with something new in a timely fashion, after spending a lot of money trying to come up with something new).

Physicists have taken advantage of the means (applying beauty as a guiding principle) and the opportunity (being employed as physicists, exclusively at academic institutions in her examples) to commit this crime.

If you watch enough crime shows, you’ll know the overused phrase that TV detectives rely on.  Find the “means, motive, and opportunity” and you’ll find your criminal.

Hossenfelder has already singled out physicists as the perps.  But as a detective she would be at a loss for motive (other than maybe, “everybody else was doing it and I wanted to keep my job”).

Here, I imagine Brown chiming in as her spunky detective partner.  Hossenfelder has laid out her analytic but impersonal accounting, and now Brown swoops in to add the humane touch.  “No, no, Sabine,” Brown says.  “Beauty was not the means; it was the motive.  The means was getting the research funding, the students, the equipment.  But the motive, well that’s just people being people: it was the pursuit of beauty they could call their own.”

Okay, maybe melodrama and mash-ups don’t go together so great, but this is an interesting line of thought:

Brown’s work suggests that the pursuit of beauty as a methodological choice may not just be about expediency or experience, but also about personal fulfillment.  That’s deep stuff.  And if it’s true, then it throws the idea of changing tactics into a different category.

It means you’re changing the motive, not the means.  Beauty isn’t just a guiding principle that might work; it’s what you believe gives your work meaning when it does succeed.  And convincing someone to change their motive is a much taller order than convincing them to change their means.  Especially if their motives are values-driven (whether they realize it or not).

 

If You Can’t Be the Change You Want to See in the World then Bring the Change

 

Trying to constrain what motives are most likely to bring about scientific discovery seems to me like it might be a fool’s errand.

Odds are it’s about the right time, right place, and right motive, to put you in a position to recognize the undiscovered.  In Hossenfelder’s defense, I think she is unwilling to accept human motives (in an appendix she advises that you try to remove human bias completely) because she’s afraid they will undermine the ability to understand the truth (understanding and truth are numbers 109 and 108 on Brown’s values list).  But there’s more than one way to reach an outcome.  If our motives are driven by values and run deep, then instead of asking scientists to change motives, we could simply bring in more people with different motives and give them a seat at the table.  In that way we gain alternative approaches by bringing in people who value them and use them by default.

And Brown’s values list includes a lot of words that easily might be interesting alternative motives (or guiding model-building principles), like adaptability, balance, curiosity, efficiency, harmony, independence, knowledge, learning, legacy, nature, order, and simplicity (just to name a few).

In the spirit of a seat at the table of debate, Hossenfelder’s book offers a counter-value to the beauty principle in model-building (understanding and truth).  And Brown’s book offers a counter-value to the stoicism principle in leadership (courage and vulnerability, # 24 and # 113 on her values list).  These two researchers bring their own motives and values, serving as the bearers of not only alternative perspectives, but more importantly alternative actions that might help make progress.

[In case you’re wondering, my three core values, in priority order, are hope (# 52 on the list), respect (# 87), and affection (not on Brown’s list).  That may help clarify my motives for everything on The Insightful Scientist website.]

 

The Ugly Truth

 

You might wonder why I suggest giving more people with different values a seat at the table, your discussion round table.

Why not just try one set of values and if it doesn’t work then replace them with a new set of people with different values?  Or why not just try and change your values until you achieve success?

The tricky thing about values is that it’s hard to change them once they’re set, usually sometime in middle childhood.  The useful thing about values is that it’s also hard to put yourself in someone else’s shoes.  That lack of imagination, empathy, and sympathy usually turns into skepticism.  And skepticism done right can be tremendously helpful to science, especially when it comes to verifying possible discoveries.

If we can’t understand, or don’t agree with, someone else’s motives then we automatically want and need more data and evidence to agree with their conclusions.  We set the bar of proof higher when it’s an ugly truth (to us) than when it’s a beautiful explanation (to us).

For example, suppose one scientist believes that embracing complexity captures the wonder of nature by valuing diversity, while another scientist believes that simplicity captures the wonder of nature by valuing connection.  We may find that while one of these people thinks needing many models for specific cases has a greater feel of “truthiness” the other person believes that having as few models as possible means you’re on the right track.  The gap between these two approaches must be bridged because at the end of the day scientific discovery is about consensus converging to a base set of truths through observation and evidence.  Filling the gaps between scientific findings and their associated motives ensures that science has a more solid foundation.  And, in our example, we may find that while at one time resources make simplicity the better strategy, at another time complexity may be just the thing for a breakthrough.

Conflicting values, and the guiding principles they generate in scientific work, are like the unfamiliar or misshapen vegetables usually hidden from view at the market.  It takes more convincing to put money into one by buying it, to invest effort in it by cooking it, and to be willing to internalize it by swallowing it.  You’d maybe rather just ignore it or toss it in the garbage.  But you never know, one person’s ugly truth may turn out to be another person’s satisfying ending.  If we don’t all sit down and share a meal together, how will we find out?

 

Interesting Stuff Related to This Post

 

  1. Website – Brené Brown’s homepage
  2. Website – Sabine Hossenfelder’s blog Backreaction
  3. Elisabeth Braw, “Misshapen fruit and vegetables: what is the business case?”, The Guardian (online), September 3, 2013, https://www.theguardian.com/sustainable-business/misshapen-fruit-vegetables-business-case.

 

Related Content on The Insightful Scientist

 

Blog Posts:

 

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “The Ugly Truth”, The Insightful Scientist Blog, August 9, 2019, https://insightfulscientist.com/blog/2019/the-ugly-truth.

 

[Page feature photo:  A pretty, pert bunch of Laotian purple-striped eggplants, roughly the size of ping pong balls. Photo by Peter Hershey on Unsplash.]

Don’t Curate the Data


It’s tempting when we talk to others about our ideas to only want to share the good stuff.  To only share the things we think are logical, sound reasonable, maybe only the things we think (or hope) will make us seem smart and focused.  But this tendency to re-frame our real experiences and distill them into nice little stories we can tell people over coffee or a beer can be a dangerous setback to getting better at a new skill set.

 

Trying Too Hard to Look Good

 

Why?  Because sometimes we are so busy trying to think about how to tell (or should I say sell) others on what we’re doing or thinking that we scrub our memories clean of the actual messy chain of events that led us to come up with the polished version.  That messy chain, and every twist, turn, and chink in its construction, is the raw knowledge from which we can learn about how we, or others, actually accomplish things.  I’ll call it “the data.”

So this fear of how others will perceive our process is one thing that gets in the way of having good data about our process.  We start to curate the data to make ourselves more acceptable to others.

But we need this data to gain a meaningful awareness of what we actually do to produce a certain outcome.  This is even more important when we try to figure out how to reproduce a mental outcome.

Maybe you came up with a winning idea once, but now you’re not sure how to get the magic back.  Or maybe you want to pass your strategy on to a younger colleague or friend, but don’t really know what you did.  Maybe you’re hoping to learn from someone else who succeeded at thinking up a breakthrough solution, but they say “I really don’t remember what I did.  It just sort of came together.”

Which brings us to a second thing that works against having access to good data about our own interior processes and patterns.  Memory.

 

Mining Memory is a Tricky Business

 

We all know we don’t have good memories, even when we are trying hard (studying for tests in school, or trying to remember the name of every person in a group of ten new people you just met are classic examples).  Memory is imperfect (we have weird, uncontrollable gaps in what we retain).  Memory is selective (we have a tendency to be really good at remembering what happened during highly emotional events, but not during more mundane or routine moments).  Memory is pliable (the more we tell and retell a version of something that happened to us, the more likely we are to lose the actual memory in place of our story version).

These tricks of memory not only frustrate us when we try to observe and learn from ourselves, but also when we try to learn from others.

There have been lots of interviews asking famous scientists who made discoveries how they did it.  But their self-reported stories are notoriously unreliable or have big gaps because they, like us, are subject to the fickle whims of memory and the hazards of trying to tell your own biography one too many times.  Mining memory for useful insights is a tricky business.

So memory and lack of awareness (or mindlessness) cause us to lose access to the precious data we need to be able to see our behaviors and patterns from a larger perspective in order to learn from them and share them.

When I first started learning about scientific discovery, recognizing these pitfalls of bad memory and mindlessness caused me a lot of annoyance.  I would think of a great example of a scientific discovery, such as a discovery that shared similarities with an area or question I wanted to make discoveries in.  I’d think, “Perfect!  I’ll go read up on how they did it, how they discovered it.  What were they reading, what were they doing, who were they talking to?”  But of course, answers to those questions wouldn’t exist!

Maybe the discovery was of limited interest so nobody bothered to ask those questions and now the discoverer had passed away.  Or maybe the discovery was huge and world changing but the histories told about it tended to re-hash the same packaged myths—like Newton and the apple falling inspiring ideas about gravity, or Einstein taking apart watches from an early age leading to picturing little clocks when working out the effects on time of traveling near light speed in special relativity.  Part fact, part fiction, these stories leave hundreds of hours of more mundane moments, links in the mental chain, unilluminated.  Good data that could guide future generations gets lost, sacrificed on the altar of telling a whimsical story.

So when I sat down in September of 2018 to start trying to work out a more modern definition of scientific discovery—something pragmatic that you could use to figure out what to do during all those mundane moments—I kept thinking about how to better capture that process of obtaining insights, as you go.

That’s when I realized we already have the methods; the problem is we always want to curate the story told after the fact.  And rather than curating the data that make it into the story (i.e., creating an executive summary and redacting some things), we end up actually curating the source data itself (i.e., never gathering the evidence in the first place).  In other words, rather than just leaving out parts of the story, we actually tune out parts of the story as we are living it, so that we literally lose the memory of what happened altogether.

But that story is the raw data that fields like metascience and the “science of science” need to help figure out how scientists can do what they do, only better.  And as scientists we should always be the expert on our own individual scientific processes.  The best way to do that is to start capturing the data about how you actually move through the research process, especially during conceptual and thinking phases.  Capture the data, don’t curate the data.

 

A Series of Events

 

Let me give you a real life example to illustrate.  As I said, I sat down to try to come up with a new definition of scientific discovery.  I’m a physicist by training.  Defining concepts is more a philosopher’s job, so at first I had a hard time taking myself and any ideas I had seriously.  I got nowhere for three months; no new ideas other than what I had already read. Then one day a series of events started that went like this:

I read a philosophy paper defining scientific discovery that made me very unhappy.  It was so different than my expectation of what a good and useful definition would be that I was grumpy.  I got frustrated and set the whole thing aside.  I questioned why I was studying the topic at all.  Maybe I should stick to my calling and passion, physics.  I read when I’m grumpy, in order to get happy.  So I searched Amazon.  I came across a book by Cal Newport called So Good They Can’t Ignore You.  It argued that passion is a bad reason to pursue a career path, which made me even grumpier; so grumpy I had to buy the book in order to be able to read it and prove to myself just how rightfully disgruntled I was with the premise.

Newport stresses the idea of “craftsmanship” throughout his book.  I was (and still am) annoyed by the book’s premise and not sold on its arguments, but “craftsmanship” is a pretty word.  That resonated with me.  I wanted to feel a sense of craftsmanship about the definition of scientific discovery I was creating and about the act of scientific discovery itself.

I didn’t want to read any more after Newport.  So I switched to watching Netflix.  By random chance I had watched a Marie Kondo tidying reality series on Netflix.  Soon after, Netflix’s algorithm popped up a suggestion for another reality series called “Abstract: The Art of Design.”  It was a series of episodes featuring designers in different fields: architects, Nike shoe designers, set designers for theater and pop-star stage shows, and so on.  It pitched the series as a behind-the-scenes look at how masters plied their craft.  Aha, craftsmanship again!  What a coincidence.  I was all over it (this was binge-watching for research, not boredom, I told myself).  I was particularly captivated by one episode about a German graphic designer, Christoph Niemann, who played with Legos, and whose work has graced the cover of The New Yorker more than almost any other artist’s.  The episode mentioned a documentary called “Jiro Dreams of Sushi.”

Stick with me.  Do you see where this is going yet?  Good, neither did I at the time.

So I hopped over to Amazon Prime Video to rent “Jiro Dreams of Sushi” about a Japanese Michelin rated chef and his lifelong obsessive, perfectionist, work ethic regarding the craft of sushi.  At one point the documentary showed a clip of Jiro being named for his Michelin star and they mentioned what the stars represent: quality, consistency, and originality.  Lightbulb moment!  Something about the ring of three words that summed up a seemingly undefinable craft (the art of creating delicious food) felt like exactly the template I needed to define the seemingly undefinable art of creating new knowledge about the natural world.

So I started trying to come up with three words that summed up “scientific discovery”.  Words that a craftsman could use to focus on elements and techniques designed to improve their discovery craft ability.  There were more seemingly mundane and off-tangent moments over a few more months before I came up with the core three keywords that are the basis of the definition I am writing up in a paper now.

The definition is distinctive, with each term getting its own clear sub-definition that helps lay out a way to critically examine a piece of research and evaluate it for its “discovery-ness”, i.e., its discovery potential or significance.  It’s also possible to quantify the definition in order to try and rank research ideas relative to one another for their discovery level (minor to major discovery).

It’s a much better idea than some of the lame generic phrases that I came up with in the early days, like “scientific discovery is solving an unrecognized problem” (*groan*).

On an unrelated track at that time, I was reading Susan Hubbuch’s book, Writing Research Papers Across the Curriculum, and had come across her idea that you create a good written thesis statement by writing out the statement in one sentence and then defining each keyword in your statement using the prompt “By <keyword> I mean…”.  So then I took the three keywords I had come up with and started drafting (dare I say crafting?) their definitions in order to clarify my new conception of “what is scientific discovery?”

So that’s the flow…my chain of discovery data:

Reading an academic paper led to disgust; disgust led to impulse spending; impulse spending brought in a book that planted the idea of craftsmanship; craftsmanship led to binge-watching; binge-watching led to hearing a nice definition of something unrelated; the nice definition inspired a template for how to define things; and simultaneously reading a textbook suggested how to tweak the template to get a unique working definition down on paper.

How do I know all this?  I wrote it down!  On scraps of paper, on sticky notes, in spiral notebooks, in Moleskines, in Google Keep lists, Evernote notes, and OneNote notes (I was going through an indecisive phase about what capture methods to use for ideas).

I learned to not just write down random thoughts, but also to jot down what inspired the thought, i.e., what was I doing at the moment the thought struck—reading something, watching something, eating something, sitting somewhere, half-heartedly listening to someone over the phone? (Sorry, Mom!)  Those are realistic data points about my own insight process that I can use later to learn better ways to trigger ideas. (And, no, my new strategy is not just to watch more Netflix.)

 

Make a Much Grander Palace of Knowledge

 

Instead of trying to leave those messy, mundane, and seemingly random instigators out, I made them part of my research documentation and noted them the way a chemist would note concentrations and temperatures, a physicist energies and momenta, a sociologist ages and regions.

And then I promised myself I wouldn’t curate the data.  I wouldn’t judge whether or not impulse book buying is a great way to get back on track with a research idea, or whether or not Marie Kondo greeting people’s homes with a cute little ritual is a logical method of arriving at a template to devise operational definitions.  I wouldn’t drop those moments from memory, or my records of the research, in order to try and polish the story of how the research happened.  I’ll just note it all down.  Keep it to review.  And maybe share it with others (mission accomplished).

Don’t curate the data, just capture the data.   Curation is best left to analysis, interpretation, and drawing conclusions, which require us to make choices—to highlight some data and ignore other data, to create links between some data and break connections among other data.  But think how much richer the world will be if we stop trying to just tell stories with the data we take and start sharing stories about how the data came to be.  The museum of knowledge will become a much grander palace.  And we might better appreciate the reality of what it is like to whole-heartedly live life as a discoverer.

 

 

Related Content on The Insightful Scientist

 

Blog Posts:

 

How To Posts:

 

Research Spotlight Posts:

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Don’t Curate the Data”, The Insightful Scientist Blog, August 2, 2019, https://insightfulscientist.com/blog/2019/dont-curate-the-data.

 

 

[Page Feature Photo: The gold dome in the Real Alcazar, the oldest used palace in Europe, located in Seville, Spain. Photo by Akshay Nanavati on Unsplash.]

Be a Person of Many Hats


When someone asks you what you do for a living, how do you answer?

Do you give your job title?  Do you say what kinds of project(s) you are working on?  Do you give your company name or name of the topic you work on?

From researchers of all stripes, working in non-profits, volunteer and hobby groups, schools, universities, industry, and government, you hear many answers.  But when scientists get together I’ve noticed people tend to label themselves as one of four “flavors” of scientist: as an experimentalist, a theorist, a computationalist, or a citizen scientist (sometimes called a “hobbyist” or “amateur scientist”).

Oftentimes, scientists will use these labels when they get nervous about having to answer questions.  If you listen to or watch videos of a lot of science talks aimed at scientists, you might have noticed this too.

I’ll give you a few examples from physics:

If someone is asked an intensely mathematical question they might say “I’m just an experimentalist, so that’s above my paygrade.”  If someone is asked to defend the possibility of building a real prototype they might say, “Oh I’m just a theorist, so I don’t know about building things, I can just tell you the physics is there.”  If an audience member asks a question that gets a dismissive response from a speaker, they might say “I was just curious.  I follow the topic as a hobby, but I don’t really keep up with the details.”

Lately, as I’ve started studying connections between researching fundamental physics and the science of scientific discovery, I’ve been asked many times, “What would you call yourself?”, “How should I introduce you to people?”, or “What would you say you do?”

Which got me thinking about how we see ourselves as scientists.  And I’ve started to wonder if using labels as personal identities might be hurting our attempts to actually discover things.

 

Finding the Third Way

 

So, “experimentalist”, “theorist”, “computationalist”, and “citizen scientist”.  First off, I should define what I mean by these words:

“Experimentalists” conduct laboratory experiments to gather new data and generate equations to describe data they’ve collected.

“Theorists” look through old, new, and especially anomalous data to invent new descriptions and equations to explain the misunderstood and to predict the unobserved.

“Computationalists” run large-scale precision calculations on computers to simulate meaningful phenomena and generate equations to capture the real world in a form they can put on computer.

“Citizen scientists” conduct projects to satisfy their curiosity and support their community and generate equations for joyful distraction or to improve the quality of life of a group they care about.

I think these labels apply to any scientific field—agriculture, psychology, geology, chemistry, physics, computer science, engineering, economics, you name it.  And I emphasize equations because I think that’s what distinguishes the fine arts (literature, music, art, dance, etc.) from the sciences.  The sciences try to represent Nature using numbers, language, and symbolic math, while the fine arts try to represent Nature using sound, light, movement, color, texture, and shape.

Like I said in the opening of this post, I certainly see people use these words to navigate tricky audience questions.  But I also think they get used in two other ways, depending on what kinds of scientific discoveries people are pursuing: longstanding problems in mature fields, or unrecognized opportunities in emerging fields.

 

Work Identity

 

In mature fields, the kinds with lots of funding and famous teams that people can name off the top of their head, I think three of these four labels (experimentalist, theorist, computationalist) are used by scientists and that they mean them as a sort of personal identity.  That’s because mature fields tend to have larger networks of people working in them.  With larger networks comes more specialization (to help manage the large volume of people and ideas).  People get assigned to roles and they develop expertise in that particular role over the course of their work career.

In mature fields even training tends to start labeling people early.  For example, at my current institution undergraduates in their first year are already assigned to the “Physics Theory” track (which requires fewer lab hours and more math) versus the “Physics” track (which requires more lab hours and less math).  And in the United States at the Ph.D. level students are divided into either experimental or theoretical tracks.  Computational folks usually fall into one or the other track as a sub-category, depending on whether or not they mainly work on simulations for large experimental collaborations, or simulations for a small (maybe five people or less) theoretical group.

Meanwhile, the pursuit of scientific discovery in mature fields tends to take the form of trying to answer longstanding open questions.  The kind that make headlines in popular science journals.  In physics these are things like the nature of the early universe or why the universe has more matter than antimatter.

When individual scientists choose to see labels like experimentalist, theorist, or computationalist as work identities, they engage with discovery in more limited ways.  They do so only to the extent that the field at large has decided they should have a role in it.

So, for example, if anomalous data is generated by an experimental group, but the field decides that it’s most likely an experimental error causing the blip, then computationalists and theorists will be discouraged from contributing to the discussion, or will suffer a hit to their credibility if they join the debate.

 

Stay in your lane.

 

Work identities are kind of like a rule that says, “Stay in your lane.”  But if the key finding is to be found by taking an off-ramp, then progress will be slow or non-existent because there’s not enough freedom of intellectual movement.

Also, I mentioned at the beginning that only three of the four labels appear in mature fields.  There’s rarely any place given to the voices of citizen scientists or hobbyists at all.

 

Work Ethic

 

On the flip side, there are emerging fields and topics.  These areas are so new that very few people are actually studying them, no rules have been established yet, and even the kinds of discoveries being pursued are hard to define.  Emerging fields are uncharted territory so anything is possible.

With so few people working on them, emerging topics don’t need hierarchies, they just need bodies willing to do the work.

So an experimentalist will be someone who values running a huge amount of tireless trial and error.  A theorist is someone who values digging around to think up reasons, and ideas, and questions.  And a computationalist is someone who values grinding through data on a computer until all those numbers start to look like a pattern.  In emerging fields you are more likely to be dismissed by co-workers until the value of the project proves itself and gains more acceptance in the mainstream, so taking on a hobbyist work ethic becomes more important: you have to value things like “passion” and “obsession” to keep people motivated through the tough times.

 

Mindset over matter

 

So in science, I think that means the labels we usually think of as identities in mature fields become a kind of work ethic in emerging fields; a style of taking on each and every task to bootstrap your way to a successful breakthrough.  They are not so much who you are as the mindset you approach your work with.

This mindset over matter approach is what allows researchers in emerging fields to pursue high-risk opportunities that may lead to scientific discoveries, or may prove to be dead ends.

But this still puts the brakes on the speed with which discoveries could be made, because I think researchers still feel like they have to find people who either innately have that mindset, were raised with that mindset, or have acquired that mindset by experience or training.

In other words, in both mature and emerging fields these labels are seen as compartmentalized rather than fused—you can own one, but not the others.

 

Troubleshooting Approach

 

That brings me back to the cryptic header I started this post with, “Finding the Third Way”.  I think of this as “finding the middle way”.  To me that means using these labels as skillsets and thinking of the whole pursuit of scientific discovery as a troubleshooting exercise.

The trouble might be that you’re bored and you want something interesting to do with your weekends, so you’re going to volunteer as a citizen scientist to contribute to research on soil health in your local area…just because you love veggies.

Or the trouble might be that you’re tired of having patients die on your watch from a preventable condition, so you’re going to raise money to run experiments on cheap lifestyle interventions to reduce the number of deaths.

Or the trouble might be that you think nuclear weapons are dangerous, but there’s all this plutonium sitting around in stockpiles with no safe, permanent way to get rid of it, so you’re going to dig into all the theories on how to dispose of anything that might give you a breakthrough idea to help solve the problem.

My point is that we solve problems that matter to us.  Personal problems, social problems, global problems.  But the problems are what matter most, not the fields.  Scientific discoveries are often made because their discoverers saw a problem that they couldn’t let go of and so they worked until they found a way to solve it.

These aren’t abstract, philosophical things.  They are practical, specific challenges that we tackle one troubleshooting step at a time.  And over the course of solving that problem, every one of the roles I’ve mentioned will probably come into play.

So instead of always looking, or waiting, or hoping that we can involve someone willing to take on “the experimentalist”, or “the theorist”, or “the computationalist”, or “the citizen scientist” responsibilities, we should consider building up a reserve of each of those things within ourselves.

 

Moving Beyond Our Training

 

If we want to give ourselves the best chance of solving a problem that matters to us and discovering something along the way, then maybe we shouldn’t be just one of those things (experimentalist, theorist, computationalist, hobbyist) in our lifetime.

Maybe we should be all of those things at one time or another.

They’re just skills.  Not destiny.

Like the logo for my website says, “Discovery awaits the mind that pursues it.”  And I chose “The Insightful Scientist” as my website’s name for a reason.  Because I wanted to remind myself every day that I pull open the homepage that science is about discovery and that science is bigger than just physics or just the ways I was trained to pursue discovery as a theoretical physicist.

We use our training as far as it will take us. But if the science is bigger than our training then we don’t give up, or say it’s a job for someone else.  We just stretch our minds a little wider open, learn a new skill, and jump once more into the fray.

 

Mantra of the Week

 

Here is this week’s one-liner; what I memorize to use as a mantra when I start to get off-track during a task that’s supposed to help me innovate, invent, and discover:

Be a person of many hats.

So when people ask me what I am or what I do I think I’ll start saying:

“I’m a Bernadette.”

“And it just so happens that the problem I’m trying to solve right now is how to put the science of scientific discovery into practice in neutrino particle physics.”

I won’t label myself as a theorist, or a neutrino physicist, or an academic.  Because the titles don’t matter.  The problems we’re trying to solve do.

There’s an English expression that says taking on different roles at work is like wearing different hats.  Well, I’m willing to wear whatever hat gets the problem solved, even if I don’t look good in fedoras.

 

Final Thoughts

 

So let’s recap the ideas and examples I’ve talked about in this post:

  • I narrowed down the labels we use for scientists to four: experimentalist, theorist, computationalist, and citizen scientist.
  • I classified scientific discovery into two types: trying to answer longstanding questions in old fields and recognizing new opportunities in young fields.
  • I argued that we use the four labels as identities or work ethics; but that a more agile approach is to think of them as skillsets.

Have your own thoughts on how we label ourselves as researchers and whether or not this helps or hinders the pursuit of scientific discovery?  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Website – Chandra Clarke’s Citizen Science Center, sharing open science projects.
  2. Web article – Angus Harrison, “Self-taught rocket scientist Steve Bennett is on a mission to make space travel safe and affordable for all – from an industrial estate in Greater Manchester,” interview in The Guardian online, April 4, 2019, https://www.theguardian.com/science/2019/apr/04/building-rockets-all-over-house-space-travel-safe-affordable-for-all.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Experimentalist, Theorist, Computationalist, Citizen Scientist: Work Identity or Work Ethic?”, The Insightful Scientist Blog, March 29, 2019, https://insightfulscientist.com/blog/2019/be-a-person-of-many-hats.

 

[Page Feature Photo: Fedoras fill a costume rack at the Warner Brothers movie studio in Burbank, California.  Photo by JOSHUA COLEMAN on Unsplash.]

Misfits Matter


How to use the trial and error method to make a scientific discovery.

 

I like moving, exploring new places, and visiting friends and family (for short manageable doses).  I can put up with traveling for work.  But one thing never ceases to annoy me:  Whenever I take a shower for the first time in a new place, I can’t for the life of me get the knobs, handles, and faucets to work right the first time.  I spend at least five minutes trying to get the water to stop being boiling or freezing, or trying to get the dribble out of the shower head to be decent enough to rinse.  Maybe you can relate.

But I’ll bet it never occurred to you that how you solve this problem is really scientific discovery skills in action:  you start fiddling with all the water controls you can see.

That’s because it’s a classic example of doing the right kind of trial and error.  So I’ll use it to outline what I think are four key dimensions that help structure trial and error for discovery:

  1. Putting in the right number of trials
  2. Putting in the right kinds of trials
  3. Putting in the right kind of error
  4. Putting in the right amount of error

The overall theme here is this — it ain’t called “trial and success,” and for good reason.  The errors are part of the magic…that special sauce…the je ne sais quoi…that makes the process work.

You may have seen versions of this idea in current business-speak around innovation and start-ups (the Lean build-measure-learn cycle anyone?).  But I needed to take it out of the entrepreneurial context and put it into a science one.

So let’s get down to brass tacks and talk about important aspects of trial and error.

 

4 Goals for Thoughtful “Trial and Error”

 

I’m going to keep the shower faucet analogy going because it’s straightforward to imagine hitting the goals for each dimension.  But to give this a fuller scientific discovery context I’ll add one technical example at the end of the post.

 

Dimension #1 — On putting the right number of trials into your trial and error.

 

Goal:

Keep running trials until you gain at least one valued action-outcome insight.

 

When you start out on a round of trial and error you are really aiming for complete understanding and the skill to make it happen on demand, with fine control.

In our shower analogy, that means it’s not just enough to know how to get water to come out of the spout.  You need to be able to control the water temperature, the water pressure, and make sure it comes out of the shower head and not the tub spout (if there is one).  Ideally, you’d learn enough to be able to manipulate the handles to produce a range of outcomes:  the temperature sweet spot for a summer day shower or a winter one; the right pressure for too much soap with soft water or for sore skin from the flu.

So one of the first things you have to figure out is: how do you know when to stop making trials?

This isn’t a technical post about conducting blind trials or sample surveys.  Here we’re talking about a more qualitative definition of done; the kind of thing you might try for an “exploratory study”.  Exploratory studies are the kind where you have no hypothesis going in.  Instead, you’re trying to find your way toward an unknown valued insight, not trying to prove or disprove a previous hypothetical insight.

The whole point of trial and error is to take a bunch of actions that will teach you how to create desired results by showing you what works (called “fits”), what doesn’t work (called “misfits”), and forcing you to learn why.

The “why” is the valued insight you’re after.

If you’ve run enough trials to figure out how to make something happen, that’s good, but not enough.  For scientific discovery you need to know precisely why and precisely how it works.

So keep running trials until you’ve come up with an answer to at least one why question.

 

Dimension #2 — On putting the right kinds of trials into your trial and error.

 

Goal:

Try a mixture of fits and misfits.

 

A key facet of trial and error is that intentionally generating mistakes helps create insight into how to generate success.

Partly, these trials are about firsthand experience.  Your job is to move from “wrong-headed” ideas to “right-tried” experiences.  To make changes to how you operate you have to clearly label and identify two things in your trial and error scenario: “actions I can take” and “results I want to control”.

Good trial and error means that you will: (1) learn the range of actions allowed; (2) try every possible major action to confirm what’s possible and what’s not; and (3) learn from experience which actions produce what outcomes.

In the last section I brought up the terms fit and misfit: in some science work, getting a match between an equation you are trying and the data is called a “fit” and getting a mismatch between the two is called a “misfit”.

So in science terms, that means you want your trials to be a mixture of things you learn will work (fits), things you learn won’t work (misfits), and, if possible, things where you have no idea what will happen (surprises).
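To make the fit/misfit idea concrete, here is a toy sketch (the data, tolerance, and function names are all invented for illustration, not taken from any real fitting package) that labels a trial equation by how far its predictions stray from the data:

```python
# Toy sketch: label a trial model a "fit" or a "misfit" by how far
# its predictions land from the observed data points.
def classify_trial(model, xs, ys, tolerance=0.5):
    """Return 'fit' if the model tracks every data point within
    tolerance, and 'misfit' otherwise."""
    worst_error = max(abs(model(x) - y) for x, y in zip(xs, ys))
    return "fit" if worst_error <= tolerance else "misfit"

xs = [0, 1, 2, 3]
ys = [0.1, 1.9, 4.2, 5.8]  # data that roughly follow y = 2x

print(classify_trial(lambda x: 2 * x, xs, ys))   # tracks the data: "fit"
print(classify_trial(lambda x: x ** 2, xs, ys))  # strays badly at x=3: "misfit"
```

The point of the sketch is that the misfit is not thrown away: knowing that y = x² strays from the data is itself a recorded result.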

For my shower analogy, let’s use a concrete example: the shower in my second bathroom, which both my mom and aunt have had to use (and, rightfully, complained about).

[Photo: the handles that control the shower in my guest bathroom in my UK apartment.]

So, for “actions I can take”: rotate left handle, rotate right handle, or pull the lever on the left handle.  And for “results I want to control”: the water temperature and the amount of water coming out of the shower head.

Then, I start moving handles and levers individually.  Every time I move a handle and don’t get the outcome I want, it’s a mistake.  But I’m doing it intentionally, so that I can learn what all the levers do.

Many of these attempts will be misfits, producing no shower at all or cold water or whatever.  Some may accidentally be fits.  Hopefully, none will produce surprises (though I have had brown water and sludge come out of faucets before).
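To make the structure concrete, here is a toy Python sketch of this kind of deliberate exploration. The “shower” function, its numbers, and every name in it are invented for illustration; the point is the pattern: enumerate the actions you can take, try each one (including deliberate misfits), and log what happens.

```python
# A toy sketch of structured trial and error (the "shower" and its numbers
# are invented for illustration): enumerate actions, try each, log outcomes.

def shower_response(left_turns, right_turns, lever_pulled):
    """Hypothetical stand-in for how the real shower behaves."""
    flow = max(0, left_turns) + (2 if lever_pulled else 0)  # made-up flow rule
    temp_c = 20 + 10 * right_turns                          # made-up temperature rule
    return {"flow": flow, "temp_c": temp_c}

# "Actions I can take" -- each handle and lever, alone and in combination.
actions = [
    {"left_turns": 0, "right_turns": 0, "lever_pulled": False},
    {"left_turns": 1, "right_turns": 0, "lever_pulled": False},
    {"left_turns": 0, "right_turns": 2, "lever_pulled": False},
    {"left_turns": 0, "right_turns": 0, "lever_pulled": True},
    {"left_turns": 1, "right_turns": 2, "lever_pulled": False},
]

# "Results I want to control": warm water actually flowing.
trial_log = []
for action in actions:
    outcome = shower_response(**action)
    label = "fit" if outcome["flow"] > 0 and 35 <= outcome["temp_c"] <= 45 else "misfit"
    trial_log.append((action, outcome, label))

for action, outcome, label in trial_log:
    print(label, action, outcome)
```

The toy numbers don’t matter; what matters is that the log itself becomes your table of links between actions and outcomes, misfits included.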

I think this visceral experience is what allows your mind to stop rationalizing why standard approaches and methods should work and get on with seriously seeking out new and novel alternatives that actually work.

And these new and novel alternatives, with their associated insights, are the soul of scientific discovery.

So you want to move into this open-minded, curious, active participant-and-observer state as quickly as possible, and trying fits and misfits will help you do that.

 

Dimension #3 — On putting the right kind of error into your trial and error.

 

Goal:

Make both extreme and incremental mistakes.

You know the actions you can take.  But you need to figure out why certain actions lead to certain results.

One great way to do this is to try the extreme of each action.

If it’s safe (or you have a reasonable expectation of safety) then pull the lever to the max, rotate the faucet handle all the way, cut out almost everything you thought was necessary, and see what happens.

In physics, this goes by the name “easy cases”.  What we really mean is use the extreme values: zero, negative infinity, or positive infinity.  Plug them into your model and see what happens.  Does it break things?  Does it give wonky answers?  Does it lead to a scenario where the role of one term in the equation becomes clearer?

That’s the beauty of extreme tests when you’re doing trial and error.  They let you crank up the volume on factors so that you can pinpoint what they might do, how they might operate in your context.
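As a hedged illustration of the “easy cases” idea, here is what plugging in extremes might look like for a toy, oscillation-style formula (the function and all its values are invented, not any real experiment’s model): each extreme isolates the role of one term.

```python
# Probing a toy model at "easy case" extremes (all names and values made up).
import math

def survival_probability(mixing, phase):
    """Toy two-parameter formula: P = 1 - mixing * sin^2(phase)."""
    return 1 - mixing * math.sin(phase) ** 2

# Extreme values make each term's role obvious.
print(survival_probability(0.0, 1.23))         # mixing off -> always 1, so mixing sets the dip depth
print(survival_probability(1.0, math.pi / 2))  # full mixing at the peak -> 0, the deepest possible dip
print(survival_probability(1.0, 0.0))          # zero phase -> 1, so the phase term drives the oscillation
```

Each line “cranks the volume” on one factor, which is exactly the pinpointing described above.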

So what about making “incremental” mistakes?  Just nudging things a little this way and a little that way to see what happens?

These are absolutely necessary too, and tend to happen later on in your trial and error process.  They are a great way to confirm and refine your understanding.

If you want to boil it down, making mistakes at the extreme ends of the action cycle hones your “this-does-that” knowledge, while making mistakes in small incremental steps helps clarify “how” knowledge.

So it’s often best to go after extreme cases in the early trials and then move toward incremental cases later on.  For example, with the shower handles, early on you’ll probably try rotating one handle all the way to the right or left to figure out which direction brings hot water.  Later on, you’ll turn the handle a little bit at a time, until you get the right temperature.

 

Dimension #4 — On putting the right amount of error into your trial and error.

 

Goal:

Make mistakes until you can link all major actions with outcomes.

 

This one is easy enough to grasp.  To put it more bluntly: how many times should you mess up on purpose?

The goal statement says it all: make enough mistakes that you can link all major actions with outcomes in your mind, and you know why they are linked the way they are.

Just imagine if you were told that every move you made to set an unfamiliar shower had to move toward the right outcome (no errors allowed).  How the heck would you succeed?  You would have to look up a manual, or find someone who had used the shower before.  It would probably slow the process down to a painstaking pace.  It would stress you out.  And it would require pre-existing insight into how to do it right.

But in discovery, you won’t have that kind of prior insight.  No one does.  So you have to be willing to get things wrong in order to start to generate that insight.

So keep getting it wrong in your trials until you really get why it doesn’t work.  Don’t avoid those misfit moments.  You should be able to make a table or a mind map of links between actions and outcomes.  If you can’t, keep making errors until you can.

 

The Four Trial and Error Dimensions in a Real Physics Research Example

 

I promised I would connect the ideas I’ve talked about to a science example, so let me do that:

For my Ph.D. neutrino physics work, at one point I had to write a piece of computer code that could reproduce the final plot and numbers in an already published paper, by the MINOS neutrino oscillation experiment, to make sure our code modeled the experiment well.  First, I wrote some code (to estimate the total number of neutrino particles we predicted the experiment would see at certain energies) based on how my research group had always done it.  Then I wrote down in my research notebook how the existing code had previously been tweaked to produce a good match.  One value had been hand-set, by trial and error, to fit.

In the newer data published at the time, we knew this tweak no longer worked.  But at first I just tried it anyway (try misfits).  Then I started changing the values in the code (make incremental changes).  And we added a few new parameters that we could adjust and I altered those values (try unknowns).  I kept detailed hand lists of the results of my changes on the final output numbers (link actions to outcomes).

Then I synthesized these behaviors into new groupings: did it make the results too big, too small, by a little, by a lot?  Did it skew all the results or just the results at certain energies?  Was it a consistent overall effect, or some weird pattern effect?
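A rough Python sketch of that grouping step (the function, thresholds, and numbers are made up for illustration, not my actual analysis code): compare each model value to the published value and classify the pattern of the mismatch.

```python
# Made-up sketch of grouping trial outcomes: compare model output to
# published values, bin by bin, and classify the pattern of the mismatch.

def classify_mismatch(model, published, tolerance=0.05):
    """Label a model-vs-data comparison by the shape of its disagreement."""
    ratios = [m / p for m, p in zip(model, published)]
    if all(abs(r - 1) <= tolerance for r in ratios):
        return "good fit"
    if all(r > 1 for r in ratios):
        return "consistently too big"
    if all(r < 1 for r in ratios):
        return "consistently too small"
    return "energy-dependent skew"  # mismatch varies bin by bin

# Toy example: too high in one energy bin, too low in another.
print(classify_mismatch([110, 95, 101], [100, 100, 100]))  # energy-dependent skew
```

Each label answers one of the synthesis questions above: too big or too small, overall or only at certain energies.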

At this point I saved many code versions to keep a record of the progression of my trials (fancy versioning software isn’t commonly used in small physics groups).
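If you want something a step up from hand-copied folders but just as simple, here is a minimal sketch of that snapshot habit in Python (the file names and tag are hypothetical): after each meaningful trial, copy the working file to a tagged, timestamped name so the progression is preserved.

```python
# Minimal sketch of hand-rolled versioning (file names hypothetical):
# after each meaningful trial, copy the working file to a tagged,
# timestamped name so the progression of trials is preserved.
import pathlib
import shutil
import time

def snapshot(path, tag):
    """Copy `path` to e.g. flux_code_misfit_20190322-101500.py and return the copy."""
    src = pathlib.Path(path)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = src.with_name(f"{src.stem}_{tag}_{stamp}{src.suffix}")
    shutil.copy2(src, dest)  # copy2 preserves file timestamps too
    return dest
```

A proper version-control tool like git does this better, of course, but this mirrors the copy-the-folder approach that small research groups actually use.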

A screenshot showing some of the folders and files from my Ph.D. computer codes that required trial and error.

And I did handwritten notes where I worked through why certain outcomes weren’t produced and others were (try until you get insight).


 

Then I did it again.  And again.  And we did it for 10 more experiments totaling…well, a LOT of code.

In the end we got a good match and we were able to use it to complete my Ph.D. work, which explored the impact of a mathematical symmetry on our current picture of the neutrino particle.

So, trial and error, being able to willfully make mistakes to gain insight, can be incredibly powerful and remains a uniquely human skill.

As a 2010 study in Nature suggested, non-expert video gamers (i.e., many with no education in the topic beyond high school level biology) out-predicted a world-leading machine algorithm, designed by expert academic biochemists and computer scientists, in coming up with correct 3-D protein shapes, because they made mistakes on purpose while generating intermediate trial solutions.

Most such algorithms, by design, are constrained to do only one thing: get a better answer than they had before.  Every step must be forward; even temporary small failures are not allowed.

But we’re messy humans.

We can take two steps back for every one step forward, or even cartwheel off to the side when the rules say only walking is allowed.  Our ability to strategically move in “the wrong direction” (briefly taking us farther away from a goal) in order to open up options that in the long-run will move us in “the right direction” (nearer the goal) is part of our human charm and innate discovery capacity.  But that requires we acknowledge up front that in pursuit of discovery many trials will be needed, and many of them will not succeed.

 

Mantra of the Week

 

Here is this week’s one-liner; what I memorize to use as a mantra when I start to get off-track during a task that’s supposed to help me innovate, invent, and discover:

Misfits matter.

Using trial and error in a conscious, structured way can move us from having thoughts on something to experiences in something.  Notice how “thoughts on” speaks to the surface, like a tiny boat on a broad ocean, while “experiences in” speaks to the depths, like a diver in deep water.  So try.  And err.  Welcome error by remembering that misfits matter and that a deep perspective is where radical insight awaits.  In taking two steps back for every one step forward, those two steps back aren’t setbacks; they’re perspective.

 

Final Thoughts

 

So let’s recap the ideas and examples I’ve talked about in this post:

  • I shared the four dimensions that help define strategic trial and error: putting in the right kind and number of trials, and putting in the right kind and amount of error.
  • I shared an example of how trial and error has been used in my own physics work and in biology to get useful insights.

Have your own recipe or experiences related to trial and error?  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Web Article – “Insight”, Wikipedia entry, https://en.m.wikipedia.org/wiki/Insight.
  2. Web Article – Ed Yong, “Foldit – tapping the wisdom of computer gamers to solve tough scientific puzzles”, Discover magazine website, Not Exactly Rocket Science Blog, August 4, 2010, http://blogs.discovermagazine.com/notrocketscience/2010/08/04/foldit-tapping-the-wisdom-of-computer-gamers-to-solve-tough-scientific-puzzles/#.XKPkLaZ7kWo.
  3. Website – MINOS neutrino oscillation experiment, http://www-numi.fnal.gov/.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Putting the Error in Trial and Error”, The Insightful Scientist Blog, March 22, 2019, https://insightfulscientist.com/blog/2019/misfits-matter.

 

[Page Feature Photo: An ornate faucet at the Hotel Royal in Aarhus, Denmark. Photo by Kirsten Marie Ebbesen on Unsplash.]

Awaken Sleeping Giants


Tell me if this sounds familiar to you:

You have a lightbulb moment.

A great idea you’ve never seen or heard before.  It seems like it could really move things in an amazing new direction.  You’re excited.  No, SUPER excited.  You deluge your friends and family with all the amazing, awesome outcomes your idea could have.

Once that first flush of excitement passes and the adrenaline from having had a genius moment settles, you maybe start to look around for useful info on parts of your idea outside your knowledge base.  And that’s when it happens.  You come across a paper, a talk, a website, or a colleague in conversation, where they discuss something painfully close to your supposedly “novel” idea.

The idea’s already been done.

To rub salt in the wound, as you dig more you find out that “the idea”, what you thought was “your idea”, was tested out by some genius years ago.  And they’ve already written about it, or tried it out, and moved on.

*sound of your ego and hope deflating here*

In my case, the “offending paper” was written before I was even born.  That’s four decades old!  I never even stood a chance of getting the first idea on that table.

So what does any of this have to do with old papers that have low citation rates?  In other words, ideas that have been out there for a while, but nobody seems to care or talk about?

 

Deciding if the Old Paper in Your Reading Pile Should Still Be There

 

Well, as a matter of fact, the paper in my example was exactly that kind of paper—it had vanished into history like an unliked and unshared tweet or Facebook post.

But if you read my Research Spotlight summary (link at the end of this post) on a Nature paper about “Team Size and ‘Disruptive’ Science” you would have learned that researchers recently discovered a link between teams that publish more “disruptive” scientific papers, patents, or computer code and the research papers they cite:  Teams proposing new ideas more often cited old unpopular papers.  By unpopular I mean those old papers weren’t cited very often, ever.

It turns out that the paper that proposed the same idea I had was an old paper (well, older than I am) and nobody seemed to cite it.  I had a good handle on just how unpopular it was because it was written by a European physicist in my own exact research field, it was published in a respectable journal, the physicist gave talks about it…  And yet I’d literally never heard of him, his work, or his contribution to this idea.

Before I read the Nature paper I mentioned on teams and disruptive science, I assumed that this paper I found, with its lack of fanfare, was a bad omen:  “That means his/my idea must be a bad one.”  I had a little pity party for myself and then I tucked the PDF and my notes into a file on my laptop, only to review on rare and sentimental occasions.

But in light of reading the Nature paper, I’ve completely re-evaluated my attitude and thoughts toward both the idea and my predecessor’s paper.  Instead of setting it aside, I need to re-evaluate what a low citation count means in this case.

And as I thought about it more and included my own experiences in publishing papers, I realized that low citation rates could have at least three meanings for a paper.  I nickname these “the niche”, “the bad”, and “the visionary.”

 

The Niche Paper

 

For niche papers the low citation rate reflects the fact that no one really cares about the paper’s content.

 

There might be a few reasons for this.  One reason reflects the content itself.  It could just be an overly specific topic (like the singing habits of mice…don’t look shocked, mice do actually “sing”), or a topic that it’s nearly impossible to research because the tools and situations don’t exist yet (like extra-dimensional theories of how neutrino particles get their mass).

The other reason reflects a failure of communication.  Maybe the authors used completely different technical jargon or math notation than anybody else has in published work.  So even if we try hard, the rest of us just might not know what the heck they’re talking about.

But there’s a third possible reason suggested by reading a paper in Technological Forecasting & Social Change, which is the focus of this week’s Research Spotlight summary (link at the end of this post).  Maybe it’s an emerging field, working right at the edge of existing knowledge.  As a result, it’s living in a sweet, but difficult, spot: at discovery’s edge.  At this point in history, it falls into a niche because both of the reasons above will trip up the paper: (1) no one will care about it because it’s not “a thing” or “trending” yet; and (2) no one will understand what it’s talking about because the focus of study is so new or under-researched that many ideas, concepts, and words will have to be invented to talk about it.

And by the way, don’t assume that “emerging” just applies to stuff in the last 5 years.  Sometimes emerging science takes decades to incubate, with just a few researchers keeping the embers alive, before it really takes off and becomes a new field of study in its own right.

Of course only the first kind of niche paper (the too specific) and the third kind (the emerging field) are potentially useful for breakthrough science, innovations, or inventions.  The second kind (the Greek-speak) just needs a good re-write.

 

The Bad Paper

 

For bad papers the low citation rate reflects the fact that the work it describes just wasn’t that good.

 

There are lots and lots of reasons, big and small, why a paper might be bad.  You could write volumes about this topic and, unfortunately, find lots of real examples to illustrate what you mean.  In fact, right now I bet you can picture an example you thought was junk work and that you still wonder to yourself, “How did that get (published/funded/awarded/bought/greenlit)?”

I have no desire to make this post a laundry list of complaints against certain papers I’ve seen (I have no patience with pessimism or destructive criticism).  The point here at The Insightful Scientist is to make progress toward scientific discovery and insight by finding fresh, valuable ways to move forward.  Not wallow and howl at the bad stuff people sometimes produce.

So let me stick to what you need to do here: recognize when a paper is “bad” so you can move on from it quickly.

Right now, I’ll just point out two big red flags that mean you should avoid using a paper at all, even to inform your own thinking, let alone to cite in one of your own writings.

First, if a paper uses inconsistent logic to either (1) justify its own findings or (2) compare itself to the works of others then you should consider it a “bad” paper and avoid it.  You don’t want that bad mental habit to rub off on you or to have your credibility tainted by association (you’ll need that credibility later on when you want to encourage a broader community to engage with your ideas).

Second, if a paper does not give sufficient information to evaluate its methods or conclusions then you should consider it a “bad paper” and leave it out of your information pile.  Again, it’s a bad habit, not laying out fully and clearly in writing what makes your work tick.  So do yourself a favor and find a better paper.  [The exception here is in sharing information about a patent or potentially patentable invention, where sharing too much detail could lead to problems in market competition.  But the answer is simple: if you publish you have an obligation to share.  The purpose of making something public by writing about it is to expand the public knowledge domain.  If you don’t want to share, don’t publish.]

What I like about using these two red flags to screen out bad papers that have wandered into your information orbit is that you can check for them even if the paper is well outside your area of expertise.

And if a radical breakthrough is your goal, you should be reading outside your expertise.

I’ve been reading in sociology, biochemistry, and library sciences to try and answer a neutrino physics question (those other fields help improve my skill set, which makes me more adept at tackling my own field).  Research suggests that this kind of intentional, broad information gathering can trigger radical insight.

Do what it takes to get the job done.  Read widely, and filter out bad papers as you find them.

 

The Visionary Paper

 

For visionary papers the low citation rate reflects the fact that the ideas presented are too far ahead of their time for others to recognize or act on yet.

 

I know, I know.  All you futurists, innovators, scientists, inventors, and entrepreneurs out there (myself included) are drooling over this category.

Visionary.

The word just smells of greatness, and we all want to make a contribution that will make it into this category.  So it’s only natural to get a little over-excited and want to label a paper related to your own “big dream” science or innovation as “visionary”.  It gives us a feel-good moment and a sense of fate, an image of what our own future might look like.

But if you remember my story from the beginning of this post, that kind of warm-and-fuzzy meets adrenaline-pumping moment is what got us into this awkward mess, sorting papers into categories, in the first place.  So here we are trying to be mature about this low citation paper and figure out what it means that someone else already came up with it, but no one paid attention.

On The Insightful Scientist I have made it my mission to learn how to be a pro at scientific discovery and share that with others.  So let’s get objective.  How can we tell if the ideas are ahead of their time?

I’ll assume that the paper has avoided any of the red flags that would make it a bad paper to rely on. (If you’re avoiding that evaluation because you’re afraid to see that paper not make the cut, have courage and be decisive.  If the paper is “bad,” it’s bad for your long-term discovery goals.)

As you evaluate the paper, remember that you’re at an advantage because you’re a “future human” 5, 10, 20, 40, even 100 years after the paper was written.  You know how some aspects of the “story” (i.e., the science) actually turned out and you can use that to help you evaluate.

Did this old paper have the right mindset—is it logically consistent, does it emphasize objectivity and evidence, and does it share information willingly?  Did other ideas presented in the paper turn out to be true or stand the test of time?  Did the paper get those ideas right, even though they were based on some false assumptions?  Are those false assumptions of the “past humans” who wrote the paper mostly a result of not having access to the data, technology, populations, or even big pots of money like we future humans have now?

What you’re really trying to figure out is if the authors had good research instincts (due to experience, mindset, or both), even in the face of limited resources.  If they did, then it’s possible they had honed their visionary skills about the topic and you might be looking at a visionary paper.  It may have provided a past blueprint for a good idea that the future can now act on.  If you want some examples of papers in this category, check out the link toward the end of this post.

And if your final decision is that the low citation paper you’ve got is visionary…build on it!

 

Learning to Sort Papers Like a Pro

 

If you remember, at the beginning of this post I said this whole stream of thought came about because I had a low citation paper sitting in a neglected folder.  I’d originally, purely based on citation rate, dismissed it as “bad”.  But upon re-evaluating it, I’ve decided it sits somewhere between niche and visionary.  I’m still working out which category I think it fits best.

But the important point is that I’ve re-engaged with the paper and I’m wrestling with the science, ideas, and methods it presents in a much more thoughtful way.  I’m not falling in love with it (like a novice might) and I’m not dismissing it out of hand either (like an old-hand might).  I’m handling it like a pro who knows that when it comes to pursuing scientific discovery with deliberate skill, learning to distinguish between the niche, the bad, and the visionary is part of your job description.

 

Mantra of the Week

 

On a final note, before I sum this post up in a short bullet list, let me say this:

If you’ve read some of my past posts from 2018, especially the old versions, then you know I sometimes like to end with an artsy, one sentence tagline, and I use the post feature photo to illustrate it.

These one-liners are what I memorize to use as mantras when I start to get off-track during a task that’s supposed to help me innovate, invent, or discover.

This week’s one-liner is:

Awaken sleeping giants.

If you want to change the knowledge landscape then sometimes you have to dig into the past to find ideas that are sleeping giants.  Once awakened, the rumble and weight of their presence will cause heaven and earth to stand up and take notice.  And as physicist Isaac Newton famously wrote, “If I have seen further, it is by standing on the shoulders of giants.”

 

Final Thoughts

 

So let’s recap the ideas and examples I’ve talked about in this post:

  • I suggested a way to sort old unpopular papers in your information pile into three categories: the niche, the bad, and the visionary.
  • I pointed out why you should throw out papers falling into the bad category and consider building on papers in the niche and visionary categories.
  • I talked about how each of these categories of papers fit into the big picture of the pursuit of scientific discovery.

Do you have your own sorting and sifting criteria for papers?  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Web Article – Carl Zimmer, “These Mice Sing to One Another — Politely,” The New York Times, February 28, 2019, https://www.nytimes.com/2019/02/28/science/mice-singing-language-brain.html.
  2. Web Article – “Like Sleeping Beauty, some research lies dormant for decades, study finds”, Phys.org website, May 25, 2015, https://phys.org/news/2015-05-beauty-lies-dormant-decades.html.

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Low Citation Papers: The Niche, the Bad, and the Visionary”, The Insightful Scientist Blog, March 15, 2019, https://insightfulscientist.com/blog/2019/awaken-sleeping-giants.

 

[Page Feature Photo: Standing figure and reclining Buddha at the Gal Vihara site in Sri Lanka.  Photo by Eddy Billard on Unsplash.]