Author: Bernadette K. Cogswell

The Ugly Truth

I love a good mash-up, so let me ask you this: What do you get when you mash up the ideas of two prolific female academics, one in social work and the other in theoretical physics?

The answer is: my musings for this week’s post, which boils down to the phrase “the ugly truth.”

 

A Tale of Two Academics

 

So which two academics am I talking about, and which two ideas?  Here’s a quick rundown (I’ve included links to their webpages at the bottom of this post in case you want to follow up):

__________________________

Brené Brown – Ph.D. in social work

 

Currently based at the University of Houston, Brown studies topics like the intersection between courage, vulnerability, and leadership.  She’s an academic researcher, a public speaker, and runs a non-profit that disseminates much of her work in the form of research-based tools and workshops.

__________________________

Sabine Hossenfelder – Ph.D. in physics

 

Currently based at the Frankfurt Institute for Advanced Studies, Hossenfelder studies topics like the foundations of physics and the intersection between philosophy, sociology, and science.  She’s an academic researcher and public speaker who writes pieces communicating science to the public and maintains a blog well known in physics circles.

__________________________

In the midst of simultaneously reading the most recent popular books published by these two researchers (Dare to Lead by Brown and Lost in Math by Hossenfelder), I was struck by a link between the two.  That link had to do with the premise of Hossenfelder’s book and one of the leadership skills Brown promotes in her book.

Both of these involve the word “beauty.”

 

Sabine Hossenfelder’s Lost in Math

 

Hossenfelder argues that physicists (in her case, taken especially to mean theoretical particle physicists and cosmologists) have been led astray by using the concept of “beauty” to guide theoretical decision-making and to lobby for which experiments should be carried out to test those theories.  By “using” I mean that she illustrates, through one-on-one interview snippets, how theorists rely on beauty to help them make choices about what to pursue and what to pass by.  She also illustrates, through a review of the literature, how theoretical physicists have tried to define beauty both with words (like “simplicity” and “symmetry”) and with numbers (through concepts like “naturalness”, the belief that the dimensionless numbers appearing in theories should be close to the value 1).
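To make “naturalness” a bit more concrete, here is the canonical example usually raised in this debate (my own gloss and numbers, not a passage from the book): the ratio of the Higgs boson mass to the Planck mass.  Naturalness expects dimensionless ratios like this to be of order 1; instead:

```latex
% Naturalness expects dimensionless ratios of order 1.
% The hierarchy problem is the standard counterexample:
%   m_H  ~ 125 GeV           (Higgs boson mass)
%   M_Pl ~ 1.22 x 10^19 GeV  (Planck mass)
\[
  \frac{m_H^2}{M_{\mathrm{Pl}}^2}
  \approx \frac{(125\ \mathrm{GeV})^2}{(1.22\times 10^{19}\ \mathrm{GeV})^2}
  \approx 10^{-34},
\]
% which is nowhere near 1 -- the sense in which the Higgs mass
% is called "unnatural" or "fine-tuned."
```

That tiny number, so far from 1, is what physicists mean when they call the Higgs mass “unnatural” or “fine-tuned.”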

According to Hossenfelder, this beauty principle drives the theoretical effort not just among a small few, but among the working many.  And she thinks it’s a problem.  Her main reason for pointing the finger is the belief that this strategy has not produced any successful new theoretical results in the last few decades.

The best quote to sum up Hossenfelder’s book in my reading so far is this:

 

“The modern faith in beauty’s guidance [in physics] is, therefore, built on its use in the development of the standard model and general relativity; it is commonly rationalized as an experience value: they noticed it works, and it seems only prudent to continue using it.” (page 26)

 

Funny that Hossenfelder should mention values.  Values are something Brown talks about at length.

 

Brené Brown’s Dare to Lead

 

The crux of Brown’s book Dare to Lead is acknowledging and leveraging the qualities that make us human (vulnerability, empathy, values, courage) in a forthright, honest, and authentic way in order to become better leaders.  Brown illustrates her concepts with numerous organizational and individual leader case studies peppered throughout the book, as well as copious academic research from her team on this specific topic.

According to Brown, the prime cause of a lack of daring leadership is cautious leadership, best expressed through the metaphor of entering an arena fully clothed in heavy-duty armor.  The energy put into developing and carrying the armor takes away from the energy left to masterfully explore the arena.

Here, I’m most interested in her thoughts on values and the role they should play in daring leadership.

In case you’re wondering, Brown defines a leader as “anyone who takes responsibility for recognizing the potential in people and processes, and who has the courage to develop that potential.” (page 4)

(It’s the idea of developing potential, which resonates with scientific discovery, that caught my eye when I read the back cover of the book on a layover in Amsterdam.)

Brown traces much of our motives to our values: they drive our behavior and determine our comfort level when we take actions that either align with (causing us to feel purposeful or content) or run counter to (causing us to feel squeamish or guilty) our values.

The best quote to sum up Brown’s discussion of values is this one:

 

“More often than not, our values are what lead us to the arena door – we’re willing to do something uncomfortable and daring because of our beliefs.  And when we get in there and stumble or fall, we need our values to remind us why we went in, especially when we are facedown, covered in dust and sweat and blood.” (page 186)

 

One last detail from Brown’s book will prime you for my mash-up:  On page 188, Brown gives a lengthy list of 100-plus items (derived from her research) from which to identify your core values.

The ninth word down on the list of values?  Beauty.

 

Beauty is Just Another Motive

 

So here’s where the mash-up begins.  And let me throw in one more element, just to make it fun.  Let me put this all in a metaphor, like something from a cheesy crime procedural TV show.  Ready to put two and two together and solve a mystery?

So, according to Hossenfelder, a crime against physics has been committed (the failure to come up with something new in a timely fashion, after spending a lot of money trying).

Physicists have taken advantage of the means (applying beauty as a guiding principle) and the opportunity (being employed as physicists, exclusively at academic institutions in her examples) to commit this crime.

If you watch enough crime shows, you’ll know the overused phrase that TV detectives rely on.  Find the “means, motive, and opportunity” and you’ll find your criminal.

Hossenfelder has already singled out physicists as the perps.  But as a detective she would be at a loss for motive (other than maybe, “everybody else was doing it and I wanted to keep my job”).

Here, I imagine Brown chiming in as her spunky detective partner.  Hossenfelder has laid out her analytic but impersonal accounting, and now Brown swoops in to add the humane touch.  “No, no, Sabine,” Brown says.  “Beauty was not the means; it was the motive.  The means was getting the research funding, the students, the equipment.  But the motive, well, that’s just people being people: it was the pursuit of beauty they could call their own.”

Okay, maybe melodrama and mash-ups don’t go together so great, but this is an interesting line of thought:

Brown’s work suggests that the pursuit of beauty as a methodological choice may not just be about expediency or experience, but also about personal fulfillment.  That’s deep stuff.  And if it’s true, then it throws the idea of changing tactics into a different category.

Then it means you’re changing the motive, not the means.  Beauty isn’t just a guiding principle that might work; it’s what you believe gives your work meaning when it does succeed.  And convincing someone to change their motive is a much taller order than convincing them to change their means.  Especially if their motives are values-driven (whether they realize it or not).

 

If You Can’t Be the Change You Want to See in the World, Then Bring the Change

 

Trying to constrain what motives are most likely to bring about scientific discovery seems to me like it might be a fool’s errand.

Odds are it’s about the right time, right place, and right motive putting you in a position to recognize the undiscovered.  In Hossenfelder’s defense, I think she is unwilling to accept human motives (in an appendix she advises that you try to remove human bias completely) because she’s afraid they will undermine the ability to understand the truth (understanding and truth are numbers 109 and 108 on Brown’s values list).  But there’s more than one way to reach an outcome.  If our motives are driven by values and run deep, then instead of asking scientists to change their motives, we could just bring in more people with different motives and give them a seat at the table.  That way you bring in alternative approaches by bringing in people who value them and use them by default.

And Brown’s values list includes a lot of words that might easily serve as interesting alternative motives (or guiding model-building principles), like adaptability, balance, curiosity, efficiency, harmony, independence, knowledge, learning, legacy, nature, order, and simplicity (just to name a few).

In the spirit of a seat at the table of debate, Hossenfelder’s book offers a counter-value to the beauty principle in model-building (understanding and truth).  And Brown’s book offers a counter-value to the stoicism principle in leadership (courage and vulnerability, #24 and #113 on her values list).  These two researchers bring their own motives and values, serving as the bearers of not only alternative perspectives but, more importantly, alternative actions that might help make progress.

[In case you’re wondering, my three core values, in priority order, are hope (#52 on the list), respect (#87), and affection (not on Brown’s list).  That may help clarify my motives for everything on The Insightful Scientist website.]

 

The Ugly Truth

 

You might wonder why I suggest giving more people with different values a seat at your discussion round table.

Why not just try one set of values and, if it doesn’t work, replace that set of people with a new set with different values?  Or why not just try to change your own values until you achieve success?

The tricky thing about values is that it’s hard to change them once they’re set, usually sometime in middle childhood.  The useful thing about values is that it’s also hard to put yourself in someone else’s shoes.  That lack of imagination, empathy, and sympathy usually turns into skepticism.  And skepticism done right can be tremendously helpful to science, especially when it comes to verifying possible discoveries.

If we can’t understand, or don’t agree with, someone else’s motives then we automatically want and need more data and evidence to agree with their conclusions.  We set the bar of proof higher when it’s an ugly truth (to us) than when it’s a beautiful explanation (to us).

For example, suppose one scientist believes that embracing complexity captures the wonder of nature by valuing diversity, while another believes that simplicity captures the wonder of nature by valuing connection.  We may find that while one of these people thinks needing many models for specific cases has a greater feel of “truthiness”, the other believes that having as few models as possible means you’re on the right track.  The gap between these two approaches must be bridged, because at the end of the day scientific discovery is about consensus converging to a base set of truths through observation and evidence.  Filling the gaps between scientific findings and their associated motives gives science a more solid foundation.  And, in our example, we may find that while at one time resources make simplicity the better strategy, at another time complexity may be just the thing for a breakthrough.

Conflicting values, and the guiding principles they generate in scientific work, are like the unfamiliar or misshapen vegetables usually hidden from view at the market.  It takes more convincing to put money into one by buying it, to invest effort in it by cooking it, and to be willing to internalize it by swallowing it.  You’d maybe rather just ignore it or toss it in the garbage.  But you never know: one person’s ugly truth may turn out to be another person’s satisfying ending.  If we don’t all sit down and share a meal together, how will we find out?

 

Interesting Stuff Related to This Post

 

  1. Website – Brené Brown’s homepage
  2. Website – Sabine Hossenfelder’s blog Backreaction
  3. Elisabeth Braw, “Misshapen fruit and vegetables: what is the business case?”, The Guardian (online), September 3, 2013, https://www.theguardian.com/sustainable-business/misshapen-fruit-vegetables-business-case.

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “The Ugly Truth”, The Insightful Scientist Blog, August 9, 2019, https://insightfulscientist.com/blog/2019/the-ugly-truth.

 

[Page feature photo:  A pretty, pert bunch of Laotian purple-striped eggplants, roughly the size of ping pong balls. Photo by Peter Hershey on Unsplash.]

Don’t Curate the Data

It’s tempting when we talk to others about our ideas to only want to share the good stuff.  To only share the things we think are logical, sound reasonable, maybe only the things we think (or hope) will make us seem smart and focused.  But this tendency to re-frame our real experiences and distill them into nice little stories we can tell people over coffee or a beer can be a dangerous setback to getting better at a new skill set.

 

Trying Too Hard to Look Good

 

Why?  Because sometimes we are so busy thinking about how to tell (or should I say sell?) others what we’re doing or thinking that we scrub our memories clean of the actual messy chain of events that led us to the polished version.  That messy chain, and every twist, turn, and chink in its construction, is the raw knowledge from which we can learn how we, or others, actually accomplish things.  I’ll call it “the data.”

So this fear of how others will perceive our process is one thing that gets in the way of having good data about our process.  We start to curate the data to make ourselves more acceptable to others.

But we need this data to gain a meaningful awareness of what we actually do to produce a certain outcome.  This is even more important when we try to figure out how to reproduce a mental outcome.

Maybe you came up with a winning idea once, but now you’re not sure how to get the magic back.  Or maybe you want to pass your strategy on to a younger colleague or friend, but don’t really know what you did.  Maybe you’re hoping to learn from someone else who succeeded at thinking up a breakthrough solution, but they say, “I really don’t remember what I did.  It just sort of came together.”

Which brings us to a second thing that works against having access to good data about our own interior processes and patterns.  Memory.

 

Mining Memory is a Tricky Business

 

We all know we don’t have good memories, even when we are trying hard (studying for tests in school, or trying to remember the name of every person in a group of ten new people you just met are classic examples).  Memory is imperfect (we have weird, uncontrollable gaps in what we retain).  Memory is selective (we have a tendency to be really good at remembering what happened during highly emotional events, but not during more mundane or routine moments).  Memory is pliable (the more we tell and retell a version of something that happened to us, the more likely we are to lose the actual memory in place of our story version).

These tricks of memory not only frustrate us when we try to observe and learn from ourselves, but also when we try to learn from others.

There have been lots of interviews asking famous scientists how they made their discoveries.  But their self-reported stories are notoriously unreliable or have big gaps, because they, like us, are subject to the fickle whims of memory and the hazards of telling your own biography one too many times.  Mining memory for useful insights is a tricky business.

So memory and lack of awareness (or mindlessness) cause us to lose access to the precious data we need to be able to see our behaviors and patterns from a larger perspective in order to learn from them and share them.

When I first started learning about scientific discovery, recognizing these pitfalls of bad memory and mindlessness caused me a lot of annoyance.  I would think of a great example of a scientific discovery, such as a discovery that shared similarities with an area or question I wanted to make discoveries in.  I’d think, “Perfect!  I’ll go read up on how they did it, how they discovered it.  What were they reading, what were they doing, who were they talking to?”  But of course, answers to those questions wouldn’t exist!

Maybe the discovery was of limited interest, so nobody bothered to ask those questions and now the discoverer has passed away.  Or maybe the discovery was huge and world-changing, but the histories told about it tend to rehash the same packaged myths—like the falling apple inspiring Newton’s ideas about gravity, or Einstein taking apart watches from an early age leading him to picture little clocks when working out how traveling near light speed affects time in special relativity.  Part fact, part fiction, these stories leave hundreds of hours of more mundane moments, links in the mental chain, unilluminated.  Good data that could guide future generations gets lost, sacrificed on the altar of telling a whimsical story.

So when I sat down in September of 2018 to start trying to work out a more modern definition of scientific discovery—something pragmatic that you could use to figure out what to do during all those mundane moments—I kept thinking about how to better capture that process of obtaining insights, as you go.

That’s when I realized we already have the methods; the problem is we always want to curate the story told after the fact.  And rather than curating the data that make it into the story (i.e., creating an executive summary and redacting some things), we end up curating the source data itself (i.e., never gathering the evidence in the first place).  In other words, rather than just leaving out parts of the story, we tune out parts of the story as we are living it, so that we literally lose the memory of what happened altogether.

But that story is the raw data that fields like metascience and the “science of science” need to help figure out how scientists can do what they do, only better.  And as scientists we should each be the expert on our own individual scientific process.  The best way to do that is to start capturing the data about how you actually move through the research process, especially during conceptual and thinking phases.  Capture the data, don’t curate the data.

 

A Series of Events

 

Let me give you a real life example to illustrate.  As I said, I sat down to try to come up with a new definition of scientific discovery.  I’m a physicist by training.  Defining concepts is more a philosopher’s job, so at first I had a hard time taking myself and any ideas I had seriously.  I got nowhere for three months; no new ideas other than what I had already read. Then one day a series of events started that went like this:

I read a philosophy paper defining scientific discovery that made me very unhappy.  It was so different from my expectation of what a good and useful definition would be that I was grumpy.  I got frustrated and set the whole thing aside.  I questioned why I was studying the topic at all.  Maybe I should stick to my calling and passion, physics.  I read when I’m grumpy, in order to get happy.  So I searched Amazon.  I came across a book by Cal Newport called So Good They Can’t Ignore You.  It argued that passion is a bad reason to pursue a career path, which made me even grumpier; so grumpy I had to buy the book in order to read it and prove to myself just how rightfully disgruntled I was with its premise.

Newport stresses the idea of “craftsmanship” throughout his book.  I was (and still am) annoyed by the book’s premise and not sold on its arguments, but “craftsmanship” is a pretty word.  That resonated with me.  I wanted to feel a sense of craftsmanship about the definition of scientific discovery I was creating and about the act of scientific discovery itself.

I didn’t want to read any more after Newport.  So I switched to watching Netflix.  By random chance I had watched a Marie Kondo tidying reality series on Netflix.  Soon after, Netflix’s algorithm popped up a suggestion for another reality series called “Abstract: The Art of Design.”  It was a series of episodes featuring designers in different fields—architects, Nike shoe designers, set designers for theater and pop-star stage shows, and so on.  It pitched itself as a behind-the-scenes look at how masters ply their craft.  Aha, craftsmanship again!  What a coincidence.  I was all over it (this was binge-watching for research, not boredom, I told myself).  I was particularly captivated by one episode about a German graphic designer, Christoph Niemann, who played with Legos, and whose work has graced the cover of The New Yorker more than almost any other artist’s.  The episode mentioned a documentary called “Jiro Dreams of Sushi.”

Stick with me.  Do you see where this is going yet?  Good, neither did I at the time.

So I hopped over to Amazon Prime Video to rent “Jiro Dreams of Sushi”, about a Japanese Michelin-starred chef and his lifelong obsessive, perfectionist work ethic regarding the craft of sushi.  At one point the documentary showed a clip of Jiro receiving his Michelin stars, and they mentioned what the stars represent: quality, consistency, and originality.  Lightbulb moment!  Something about the ring of three words that summed up a seemingly undefinable craft (the art of creating delicious food) felt like exactly the template I needed to define the seemingly undefinable art of creating new knowledge about the natural world.

So I started trying to come up with three words that summed up “scientific discovery”.  Words that a craftsman could use to focus on elements and techniques designed to improve their discovery craft ability.  There were more seemingly mundane and off-tangent moments over a few more months before I came up with the core three keywords that are the basis of the definition I am writing up in a paper now.

The definition is distinctive, with each term getting its own clear sub-definition that helps lay out a way to critically examine a piece of research and evaluate it for its “discovery-ness”, i.e., its discovery potential or significance.  It’s also possible to quantify the definition in order to try to rank research ideas relative to one another by their discovery level (minor to major discovery).
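Since the three keywords themselves are saved for the paper, here is only a hypothetical sketch of what “quantifying the definition” could look like: score each research idea on three criteria and rank by the total.  The criterion names, weights, and numbers below are my invented placeholders, not the actual definition.

```python
# Hypothetical sketch only: the real three keywords are unpublished here,
# so "criterion_a/b/c" are invented placeholders, not the actual terms.

def discovery_score(scores: dict[str, float],
                    weights: dict[str, float] | None = None) -> float:
    """Combine 0-10 scores on three criteria into a single ranking number."""
    weights = weights or {"criterion_a": 1.0, "criterion_b": 1.0, "criterion_c": 1.0}
    return sum(weights[k] * scores[k] for k in weights)

# Invented example ideas, scored 0-10 on each placeholder criterion.
ideas = {
    "idea 1": {"criterion_a": 7, "criterion_b": 4, "criterion_c": 9},
    "idea 2": {"criterion_a": 5, "criterion_b": 8, "criterion_c": 3},
}

# Rank ideas from most to least "discovery-level".
for name, s in sorted(ideas.items(), key=lambda kv: -discovery_score(kv[1])):
    print(f"{name}: {discovery_score(s):.1f}")
```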

It’s a lot better than some of the lame generic phrases I came up with in the early days, like “scientific discovery is solving an unrecognized problem” (*groan*).

On an unrelated track at that time, I was reading Susan Hubbuch’s book Writing Research Papers Across the Curriculum and had come across her idea that you create a good written thesis statement by writing out the statement in one sentence and then defining each keyword in your statement using the prompt “By <keyword> I mean…”.  So I took the three keywords I had come up with and started drafting (dare I say crafting?) their definitions in order to clarify my new conception of “what is scientific discovery?”

So that’s the flow…my chain of discovery data:

Reading an academic paper led to disgust; disgust led to impulse spending; impulse spending brought in a book that planted the idea of craftsmanship; craftsmanship led to binge-watching; binge-watching led to hearing a nice definition of something unrelated; the nice definition inspired a template for how to define things; and simultaneously reading a textbook suggested how to tweak the template to get a unique working definition down on paper.

How do I know all this?  I wrote it down!  On scraps of paper, on sticky notes, in spiral notebooks, in Moleskines, in Google Keep lists, in Evernote, and in OneNote (I was going through an indecisive phase about what capture methods to use for ideas).

I learned to not just write down random thoughts, but also to jot down what inspired the thought, i.e., what was I doing at the moment the thought struck—reading something, watching something, eating something, sitting somewhere, half-heartedly listening to someone over the phone…(Sorry, Mom!)?  Those are realistic data points about my own insight process that I can use later to learn better ways to trigger ideas. (And, no, my new strategy is not just to watch more Netflix.)

 

Make a Much Grander Palace of Knowledge

 

Instead of trying to leave those messy, mundane, and seemingly random instigators out, I made them part of my research documentation and noted them the way a chemist would note concentrations and temperatures, a physicist energies and momenta, a sociologist ages and regions.

And then I promised myself I wouldn’t curate the data.  I wouldn’t judge whether or not impulse book buying is a great way to get back on track with a research idea, or whether or not Marie Kondo greeting people’s homes with a cute little ritual is a logical method of arriving at a template to devise operational definitions.  I wouldn’t drop those moments from memory, or my records of the research, in order to try and polish the story of how the research happened.  I’ll just note it all down.  Keep it to review.  And maybe share it with others (mission accomplished).

Don’t curate the data, just capture the data.   Curation is best left to analysis, interpretation, and drawing conclusions, which require us to make choices—to highlight some data and ignore other data, to create links between some data and break connections among other data.  But think how much richer the world will be if we stop trying to just tell stories with the data we take and start sharing stories about how the data came to be.  The museum of knowledge will become a much grander palace.  And we might better appreciate the reality of what it is like to whole-heartedly live life as a discoverer.

 

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Don’t Curate the Data”, The Insightful Scientist Blog, August 2, 2019, https://insightfulscientist.com/blog/2019/dont-curate-the-data.

 

 

[Page Feature Photo: The gold dome in the Real Alcazar, the oldest palace still in use in Europe, located in Seville, Spain. Photo by Akshay Nanavati on Unsplash.]

Good Things Come in Threes

Have you ever watched a movie or TV show, or read a book, where at the end of the story the main character saves the day by doing something unbelievable?   By unbelievable I mean that they do something completely out of character.  This kind of ending can leave a bad taste in your mouth, as if the writers didn’t do their job in making us believe the character had changed enough to become a person who could behave that way by the end.

When I was working on my degree in creative writing, there was a phrase that summed up the problem:

“Once is an accident, twice is a coincidence, three times is a pattern.”

The idea is that people open up to the possibility that something is plausible by seeing relevant elements happen enough times that we decide a pattern is believable.  It’s kind of a “conception through perception” game.  If a character behaves in ways that build up to the ending then we consider the ending reasonable.  But if we don’t see enough evidence then we find it hard to believe and the ending will seem like a cheap magic trick and a waste of our time (and money).

In my own experience, I’ve found it pays to be aware that this little rule of three affects not only how writers convince us of story endings, but also how we convince ourselves that some of our ideas merit pursuit.

That’s because deciding if a research idea is worth investigating is really about deciding if there’s enough of a pattern there to plausibly lead to an interesting ending…and hopefully that ending will be a scientific discovery.

So let’s talk about how to translate this magic of the number three from creative writing into research in a way that will help us decide if a research idea should move to the top of our to-do list or get shuffled to the back burner.

 

THREE…Essential Elements of an Idea

 

Most of our time as scientists is spent in the “articulation” and “evaluation” phases of scientific discovery.  Meaning, we worry a lot about defining our ideas and assessing if they are useful, correct, and/or meaningful.

In starting on a research topic, it can be hard to form a clear awareness of what we mean by our new ideas.  And once we’ve jotted something down on paper, or typed it up, it can be difficult to decide if the idea seems worth focusing on.  The tendency is to have conversations in your head about it and then put it on the mental back burner because of the feelings of “riskiness” that working on discovery-level science can bring up.

If you’re stuck with a sense that you “have an idea”, but you couldn’t yet share that idea with someone in a three-minute sound bite, then here’s something to try.  You can write this down, type it up, do a voice memo, or some combo of all three.  Whatever works for you.  I’ll use pen-and-paper writing as my example, since that’s how I prefer to work:

 

  1. Write down the idea you are trying to get clear in your head as a one word prompt. Stick to one word, no phrases or sentences.
  2. Spend a few minutes (no more than 15) just thinking about the idea behind your one-word prompt. Now, write down three more essential words that capture the heart of the idea.  These new words should sum up the essential elements, features, behaviors, or requirements of your prompt word.  Again stick to just three words, no phrases or sentences here either.  But you must write down at least three words, no less.
  3. Now create a list numbered one to three. For each number write down what you mean by each of the essential words.  You can write in phrases or sentences here.  But keep it to no more than 1-2 sentences per numbered item.  Start each numbered item with the prompt “By <essential word> I mean…”  You can spend up to one whole day to complete this list.  But finish this whole exercise (steps 1-3) in 24 hours or less.
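
If you prefer digital capture, here is one possible way to template the exercise above so the rules enforce themselves.  This is my own sketch, not part of the original exercise, and the example words are invented for illustration.

```python
# A minimal digital template for the exercise above (my own sketch).
from dataclasses import dataclass

@dataclass
class IdeaSketch:
    prompt: str                 # Step 1: one word only, no phrases.
    essentials: dict[str, str]  # Steps 2-3: word -> "By <word> I mean..."

    def __post_init__(self):
        if " " in self.prompt.strip():
            raise ValueError("Step 1: stick to a single word, no phrases.")
        if len(self.essentials) != 3:
            raise ValueError("Step 2: exactly three essential words.")

# Invented example content, just to show the shape.
idea = IdeaSketch(
    prompt="resilience",
    essentials={
        "stress":   "By stress I mean the disturbance the system absorbs.",
        "recovery": "By recovery I mean the return toward a working state.",
        "memory":   "By memory I mean lasting changes the disturbance leaves.",
    },
)
```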

 

This little exercise can help you generate a clearer picture of your idea by forcing you to pick and choose what matters most to you and define it.

That’s where you as a scientist bring your best asset, your personal diversity, to the playing field.  Don’t use other people’s words or definitions for this exercise.  Set your phone aside.  Don’t use Google.  Don’t use textbooks or published papers.  Just use what you’ve already got inside your head.

I cap the time you spend on it at 24 hours to keep you from overthinking it.  The goal here is to make a rapid decision—“research this” or “shelve this.”  You want to build momentum, not stall out in the graveyard of analysis paralysis.

The reason I say identify three essential words goes back to the accident-coincidence-pattern idea.  Three words is a good sweet spot to help make abstract ideas more concrete.  Think of it like triangulating a signal: getting three points of reference lets you narrow down and enclose your idea in a more well-defined area.

 

THREE…Sources of Information

 

At this point it’s helpful to get out of your own head and take a look at what other people are saying about your idea.  In theory, you probably started out by reading the work of others or listening to someone speak, which helped spark the idea you are working through now.  So you may already have some good sources to look over again.

The goal is to get three sources (by “source” I mean a written or spoken piece of work) you can compare against the idea you formulated in the previous exercise.  You want to read them (or re-read them) and compare how you formulated your idea to how the author(s) or speaker(s) formulated it.

The most important thing is to find good quality sources to help evaluate your idea.

If you don’t know how to find or consider sources for their quality, here are some tips:

  • Look for good quality information, not good quality authors. That means you want sources that are complete, accurate and have minimal bias (or consciously acknowledged bias).  Authors, writers, scientists, journalists, etc. are only human.  No one produces good quality work all the time.  Evaluate each information source individually; don’t just assume that famous names, or even people you know who usually do good work, put in that effort this time.  We all have off days.
  • Value sources that speak most directly to the idea you are working through, with real data and more references to explore. Be open to traditional (peer-reviewed published articles, monographs, academic books, etc.) and nontraditional (blogs, popular science outlets, podcasts, etc.) sources.  Evaluate each source individually.  I usually rank items that include real data (even if it’s just a thoroughly explained personal example) and that reference other good quality sources I can freely access (no paywalls) more highly than ones that are tangential to my topic or only talk in general terms.
  • Try to get a good variety in your three sources. Make sure they are all by different authors or speakers.  Try to get different perspectives in each one, i.e., the authors are from different fields, different career stages, different job sectors, are different genders, ethnicities, ages, nationalities, etc.  The sources don’t need to tick all these boxes, but do the best you can.  Try to ensure that you don’t rely too heavily on just one voice in the debate, which could cause you to repeat what’s already been done instead of trying something new.

 

Again, don’t overthink this.  I’d limit the time you spend on this to one week.  Do the best you can with the information you have access to.

Once you’ve got these sources, spend some time reading them and noting the differences between how you articulated the idea and how they articulated the idea.  You’re looking for similarities, differences, things they mention that you left out completely, and things you mention that they ignore (this last one is where scientific discovery lives).

 

THREE…Mental Examples

 

Now it’s time to move out of the “rainbows and butterflies” world and into the “bricks and mortar” world.

What I mean by this is that in the beginning we tend to be pretty excited, enthusiastic, and confident about our own ideas when they’ve only existed in our head.  This is the “rainbows and butterflies” world.  These feelings are a good way to generate momentum to get started on a project and they encourage “thinking.”   But they’re not very helpful to encourage “doing.”  Doing requires having a clear idea of what the next action is.  That’s the “bricks and mortar” part.  Rainbows and butterflies are inspiring, they captivate and focus our mental attention, but they are hard to hold in your two hands.  With bricks and mortar it’s much easier to grasp how to start building something.

Applying your idea to examples is a way to get started on the bricks and mortar “doing” and to see if you’ve missed out on any major facets of defining your idea so that it’s open to scientific investigation.  I like my three examples to cover three types (three is still the magic number!):

  1. An example that fits your idea really well (an “exemplar”).
  2. An example that doesn’t fit your idea at all (a “counter-example”).
  3. An example where it’s hard to tell if it fits your idea or not (a “neutral example”).

 

Covering these three bases will encourage you to be deliberate and thoughtful and to assess your idea for its strengths (illustrated by the exemplar), its weaknesses (illustrated by the counter-example), and its limits and areas for improvement (illustrated by the neutral example).

You want to develop a more realistic understanding of what your idea is (you could tell someone about the exemplar in conversation as a way to help describe your idea) and to acknowledge its limits and shortcomings.

If the limits make the idea not useful, or the shortcomings show up for the very examples you were trying to explain, then I find it’s best to go back and try redefining my idea.  Try changing up the essential words or changing their definitions until you have an idea that holds up better to this simple evaluation method.

 

THREE…Drafts

 

Now you’re ready to put your idea into a working definition that you can make a decision on.

I know, I know: all of that work just to get to what most people consider the starting point for research!

That’s why the tagline for The Insightful Scientist is “Discovery awaits the mind that pursues it.”  Mental preparation and technique are a huge part of being a scientist and trying to make scientific discoveries.  Learning processes and strategies to wield our mindset more effectively is one of the best ways to run a winning race in pursuit of discovery.

The point of all this mental preparation is to give yourself a clear picture of where your idea stands and the challenges and advantages to trying to investigate it.  That is what gives you the ability to decide if it should move to the top of your to-do list or move to your mental back burner.

This last step ensures that you have something concrete to either (1) return to later if the idea doesn’t make the to-do list for now, or (2) act on right away if it does make your to-do list.

So set aside a day or two for this and type or write (no voice memos here) a formulation of your idea in complete sentences that includes your prompt word, the essential words you identified, and their definitions.  Keep the entire working definition to a minimum of one sentence and a maximum of five sentences (i.e., a paragraph).  If you prefer word count goals, try for something in the 100 to 250 word range.

Write three drafts of your working definition:

  • First write a “rough draft” that just gets all the basic elements of your working definition (one word prompt, three essential words, definitions of those essential words) in there in grammatically correct language with proper spelling.
  • Then write a “second draft” that most likely changes some core features of the definition, like the essential words or their meanings, or adds on to clarify exactly what you mean.
  • Then write a “third draft” that tries to cut down on unnecessary words, overly complicated phrases, or overly technical words. Just include the essential in your definition, not the useful or the interesting.

 

Once you’ve got your third draft of your working definition it’s up to you to chart your own course and make a decision: are you going to research this idea or not?  With all that mental preparation you’re in a much better spot to make a more thoughtful decision and you could explain that decision to someone else.  Game. Set. Match.

 

Good Things Come in Threes

 

So that’s how I translated the idea of “Once is an accident, twice a coincidence, and three times a pattern” into a way of gathering information to decide which scientific ideas to pursue right now.  In fact, I just used it last week to finally decide that one of the many working definitions of “scientific discovery” I have come up with over the last eight months is worth putting into a paper to submit to the open access philosophy journal Ergo later this year.

It’s important to point out that this general rule of three is not (necessarily) sufficient for a scientific investigation to be rigorous.  That depends on the method being used.  This rule of three is more about how to decide if fledgling ideas or flashes of insight from brainstorms are worthy of becoming methodical scientific studies.  But as a general mental rule, especially if you’re feeling trepidatious, giving yourself a set of three (sources, examples, key words, ideas, sounding boards, etc.) can be an effective way to help you decide what makes the cut.

There’s another saying that also relies on the number three:  “Good things come in threes.”  In science accidents spark awareness, coincidences spark curiosity, and patterns spark discoveries.

So maybe there is power and magic to the number three.

Of course there’s only one way to find out if my anecdotal use of the number three will lead you to your own epic story of discovery: take a chance, roll the dice, and jump in with an open mind to try it out.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Good Things Come in Threes”, The Insightful Scientist Blog, July 26, 2019, https://insightfulscientist.com/blog/2019/good-things-come-in-threes.

 

[Page Feature Photo: Close up image of red dice. Photo by Mike Szczepanski on Unsplash.]

Be a Person of Many Hats

When someone asks you what you do for a living, how do you answer?

Do you give your job title?  Do you say what kinds of project(s) you are working on?  Do you give your company name or name of the topic you work on?

From researchers of all stripes, working in non-profits, volunteer and hobby groups, schools, universities, industry, and government, you hear many answers.  But when scientists get together I’ve noticed people tend to label themselves as one of four “flavors” of scientist: as an experimentalist, a theorist, a computationalist, or a citizen scientist (sometimes called a “hobbyist” or “amateur scientist”).

Oftentimes, scientists will use these labels when they get nervous about having to answer questions.  If you listen to or watch videos of a lot of science talks for scientists, you might have noticed this too.

I’ll give you a few examples from physics:

If someone is asked an intensely mathematical question they might say “I’m just an experimentalist, so that’s above my paygrade.”  If someone is asked to defend the possibility of building a real prototype they might say, “Oh I’m just a theorist, so I don’t know about building things, I can just tell you the physics is there.”  If an audience member asks a question that gets a dismissive response from a speaker, they might say “I was just curious.  I follow the topic as a hobby, but I don’t really keep up with the details.”

Lately, as I’ve started studying connections between researching fundamental physics and the science of scientific discovery, I’ve been asked many times, “What would you call yourself?”, “How should I introduce you to people?”, or “What would you say you do?”

Which got me thinking about how we see ourselves as scientists.  And I’ve started to wonder if using labels as personal identities might be hurting our attempts to actually discover things.

 

Finding the Third Way

 

So, “experimentalist”, “theorist”, “computationalist”, and “citizen scientist”.  First off, I should define what I mean by these words:

“Experimentalists” conduct laboratory experiments to gather new data and generate equations to describe data they’ve collected.

“Theorists” look through old, new, and especially anomalous data to invent new descriptions and equations to explain the misunderstood and to predict the unobserved.

“Computationalists” run large-scale precision calculations on computers to simulate meaningful phenomena and generate equations to capture the real world in a form they can put on a computer.

“Citizen scientists” conduct projects to satisfy their curiosity and support their community and generate equations for joyful distraction or to improve the quality of life of a group they care about.

I think these labels apply to any scientific field—agriculture, psychology, geology, chemistry, physics, computer science, engineering, economics, you name it.  And I emphasize equations because I think that’s what distinguishes the fine arts (literature, music, art, dance, etc.) from the sciences.  The sciences try to represent Nature using numbers, language, and symbolic math, while the fine arts try to represent Nature using sound, light, movement, color, texture, and shape.

Like I said in the opening of this post, I certainly see people use these words to navigate tricky audience questions.  But I also think they get used in two other ways, depending on what kinds of scientific discoveries people are pursuing: longstanding problems in mature fields, or unrecognized opportunities in emerging fields.

 

Work Identity

 

In mature fields, the kinds with lots of funding and famous teams that people can name off the top of their head, I think three of these four labels (experimentalist, theorist, computationalist) are used by scientists and that they mean them as a sort of personal identity.  That’s because mature fields tend to have larger networks of people working in them.  With larger networks comes more specialization (to help manage the large volume of people and ideas).  People get assigned to roles and they develop expertise in that particular role over the course of their work career.

In mature fields even training tends to start labeling people early.  For example, at my current institution undergraduates in their first year are already assigned to a “Physics Theory” track (which requires fewer lab hours and more math) or a “Physics” track (which requires more lab hours and less math).  And in the United States at the Ph.D. level, students are divided into either experimental or theoretical tracks.  Computational folks usually fall into one track or the other as a sub-category, depending on whether they mainly work on simulations for large experimental collaborations or simulations for a small (maybe five people or fewer) theoretical group.

Meanwhile, the pursuit of scientific discovery in mature fields tends to take the form of trying to answer longstanding open questions.  The kind that make headlines in popular science journals.  In physics these are things like the nature of the early universe or why the universe has more matter than antimatter.

When individual scientists choose to see labels like experimentalist, theorist, or computationalist as work identities, they engage with discovery in more limited ways.  They do so only to the extent that the field at large has decided they should have a role in it.

So, for example, if anomalous data is generated by an experimental group, but the field decides that it’s most likely an experimental error causing the blip, then computationalists and theorists will be discouraged from contributing to the discussion, or will suffer a hit to their credibility if they join the debate.

 

Stay in your lane.

 

Work identities are kind of like a rule that says, “Stay in your lane.”  But if the key finding is to be found by taking an off-ramp, then progress will be slow or non-existent because there’s not enough freedom of intellectual movement.

Also, I mentioned at the beginning that only three of the four labels appear in mature fields.  There’s rarely any place given to the voices of citizen scientists or hobbyists at all.

 

Work Ethic

 

On the flip side, there are emerging fields and topics.  These areas are so new that very few people are actually studying them, no rules have been established yet, and even the kinds of discoveries being pursued are hard to define.  Emerging fields are uncharted territory so anything is possible.

With so few people working on them, emerging topics don’t need hierarchies, they just need bodies willing to do the work.

So an experimentalist will be someone who values running a huge amount of tireless trial and error.  A theorist is someone who values digging around to think up reasons, and ideas, and questions.  And a computationalist is someone who values grinding through data on a computer until all those numbers start to look like a pattern.  In emerging fields you are more likely to be dismissed by co-workers until the value of the project proves itself and gains more acceptance in the mainstream.  So taking on a hobbyist work ethic becomes more important, as you have to value things like “passion” and “obsession” to keep people motivated through the tough times.

 

Mindset over matter

 

So in science, I think that means the labels we usually think of as identities in mature fields become a kind of work ethic in emerging fields; a style of taking on each and every task to bootstrap your way to a successful breakthrough.  They are not so much who you are as the mindset you approach your tasks with.

This mindset over matter approach is what allows researchers in emerging fields to pursue high-risk opportunities that may lead to scientific discoveries, or may prove to be dead ends.

But this still puts the brakes on the speed with which discoveries could be made, because I think researchers still feel like they have to find people who either innately have that mindset, were raised with that mindset, or have acquired that mindset by experience or training.

In other words, in both mature and emerging fields these labels are seen as compartmentalized rather than fused—you can own one, but not the others.

 

Troubleshooting Approach

 

That brings me back to the cryptic header I started this post with, “Finding the Third Way”.  I think of this as “finding the middle way”.  To me that means using these labels as skillsets and thinking of the whole pursuit of scientific discovery as a troubleshooting exercise.

The trouble might be that you’re bored and you want something interesting to do with your weekends, so you’re going to volunteer as a citizen scientist to contribute to research on soil health in your local area…just because you love veggies.

Or the trouble might be that you’re tired of having patients die on your watch from a preventable condition, so you’re going to raise money to run experiments on cheap lifestyle interventions to reduce the number of deaths.

Or the trouble might be that you think nuclear weapons are dangerous, but there’s all this plutonium sitting around in stockpiles with no safe, permanent way to get rid of it, so you’re going to dig into all the theories on how to dispose of anything that might give you a breakthrough idea to help solve the problem.

My point is that we solve problems that matter to us.  Personal problems, social problems, global problems.  But the problems are what matter most, not the fields.  Scientific discoveries are often made because their discoverers saw a problem that they couldn’t let go of and so they worked until they found a way to solve it.

These aren’t abstract, philosophical things.  They are practical, specific challenges that we tackle one troubleshooting step at a time.  And over the course of solving that problem, every one of the roles I’ve mentioned will probably come into play.

So instead of always looking, or waiting, or hoping that we can involve someone willing to take on “the experimentalist”, or “the theorist”, or “the computationalist”, or “the citizen scientist” responsibilities, we should consider building up a reserve of each of those things within ourselves.

 

Moving Beyond Our Training

 

If we want to give ourselves the best chance of solving a problem that matters to us and discovering something along the way, then maybe we shouldn’t be just one of those things (experimentalist, theorist, computationalist, hobbyist) in our lifetime.

Maybe we should be all of those things at one time or another.

They’re just skills.  Not destiny.

Like the logo for my website says, “Discovery awaits the mind that pursues it.”  And I chose “The Insightful Scientist” as my website’s name for a reason.  Because I wanted to remind myself, every day that I pull open the homepage, that science is about discovery and that science is bigger than just physics or just the ways I was trained to pursue discovery as a theoretical physicist.

We use our training as far as it will take us. But if the science is bigger than our training then we don’t give up, or say it’s a job for someone else.  We just stretch our minds a little wider open, learn a new skill, and jump once more into the fray.

 

Mantra of the Week

 

Here is this week’s one-liner; what I memorize to use as a mantra when I start to get off-track during a task that’s supposed to help me innovate, invent, and discover:

Be a person of many hats.

So when people ask me what I am or what I do I think I’ll start saying:

“I’m a Bernadette.”

“And it just so happens that the problem I’m trying to solve right now is how to put the science of scientific discovery into practice in neutrino particle physics.”

I won’t label myself as a theorist, or a neutrino physicist, or an academic.  Because the titles don’t matter.  The problems we’re trying to solve do.

There’s an English expression that says taking on different roles at work is like wearing different hats.  Well, I’m willing to wear whatever hat gets the problem solved, even if I don’t look good in fedoras.

 

Final Thoughts

 

So let’s recap the ideas and examples I’ve talked about in this post:

  • I narrowed down the labels we use for scientists to four: experimentalist, theorist, computationalist, and citizen scientist.
  • I classified scientific discovery into two types: trying to answer longstanding questions in old fields and recognizing new opportunities in young fields.
  • I argued that we use the four labels as identities or work ethics; but that a more agile approach is to think of them as skillsets.

Have your own thoughts on how we label ourselves as researchers and whether or not this helps or hinders the pursuit of scientific discovery?  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Website – Chandra Clarke’s Citizen Science Center, sharing open science projects.
  2. Web article – Angus Harrison, “Self-taught rocket scientist Steve Bennett is on a mission to make space travel safe and affordable for all – from an industrial estate in Greater Manchester,” interview in The Guardian online, April 4, 2019, https://www.theguardian.com/science/2019/apr/04/building-rockets-all-over-house-space-travel-safe-affordable-for-all.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Experimentalist, Theorist, Computationalist, Citizen Scientist: Work Identity or Work Ethic?”, The Insightful Scientist Blog, March 29, 2019, https://insightfulscientist.com/blog/2019/be-a-person-of-many-hats.

 

[Page Feature Photo: Fedoras fill a costume rack at the Warner Brothers movie studio in Burbank, California.  Photo by Joshua Coleman on Unsplash.]

Misfits Matter

How to use the trial and error method to make a scientific discovery.

 

I like moving, exploring new places, and visiting friends and family (in short, manageable doses).  I can put up with traveling for work.  But one thing never ceases to annoy me:  Whenever I take a shower for the first time in a new place, I can’t for the life of me get the knobs, handles, and faucets to work right the first time.  I spend at least five minutes trying to get the water to stop being boiling or freezing, or trying to get the dribble out of the shower head to be decent enough to rinse.  Maybe you can relate.

But I’ll bet it never occurred to you that how you solve this problem is really scientific discovery skills in action:  you start fiddling with all the water controls you can see.

That’s because it’s a classic example of doing the right kind of trial and error.  So I’ll use it to outline what I think are four key dimensions that help structure trial and error for discovery:

  1. Putting in the right number of trials
  2. Putting in the right kinds of trials
  3. Putting in the right kind of error
  4. Putting in the right amount of error

The overall theme here is this — it’s called “trial and error”, not “trial and success”, for a reason.  The errors are part of the magic…that special sauce…the je ne sais quoi…that makes the process work.

You may have seen versions of this idea in current business-speak around innovation and start-ups (the Lean build-measure-learn cycle, anyone?).  But I needed to take it out of the entrepreneurial context and put it into a science one.

So let’s get down to brass tacks and talk about important aspects of trial and error.

 

4 Goals for Thoughtful “Trial and Error”

 

I’m going to keep the shower faucet analogy going because it’s straightforward to imagine hitting the goals for each dimension.  But to give this a fuller scientific discovery context I’ll add one technical example at the end of the post.

 

Dimension #1 — On putting the right number of trials into your trial and error.

 

Goal:

Keep running trials until you gain at least one valued action-outcome insight.

 

When you start out on a round of trial and error you are really aiming for complete understanding and the skill to make it happen on demand, with fine control.

In our shower analogy, that means it’s not just enough to know how to get water to come out of the spout.  You need to be able to control the water temperature, the water pressure, and make sure it comes out of the shower head and not the tub spout (if there is one).  Ideally, you’d learn enough to be able to manipulate the handles to produce a range of outcomes:  the temperature sweet spot for a summer day shower or a winter one; the right pressure for too much soap with soft water or for sore skin from the flu.

So one of the first things you have to figure out is: how do you know when to stop making trials?

This isn’t a technical post about conducting blind trials or sample surveys.  Here we’re talking about a more qualitative definition of done: the kind of thing you might use for an “exploratory study”.  Exploratory studies are the kind where you have no hypothesis going in.  Instead, you’re trying to find your way toward an unknown valued insight, not trying to prove or disprove a previous hypothetical insight.

The whole point of trial and error is to take a bunch of actions that will teach you how to create desired results by showing you what works (called “fits”), what doesn’t work (called “misfits”), and forcing you to learn why.

The “why” is the valued insight you’re after.

If you’ve run enough trials to figure out how to make something happen, that’s good, but not enough.  For scientific discovery you need to know precisely why and precisely how it works.

So keep running trials until you’ve come up with an answer to at least one why question.

 

Dimension #2 — On putting the right kinds of trials into your trial and error.

 

Goal:

Try a mixture of fits and misfits.

 

A key facet of trial and error is that intentionally generating mistakes helps create insight into how to generate success.

Partly, these trials are about firsthand experience.  Your job is to move from “wrong-headed” ideas to “right-tried” experiences.  To make changes to how you operate you have to clearly label and identify two things in your trial and error scenario: “actions I can take” and “results I want to control”.

Good trial and error means that you will: (1) learn the range of actions allowed; (2) try every possible major action to confirm what’s possible and what’s not; and (3) learn from experience which actions produce what outcomes.

In the last section I brought up the terms fit and misfit: in some science work, getting a match between an equation you are trying and the data is called a “fit” and getting a mismatch between the two is called a “misfit”.

So in science terms, that means you want your trials to be a mixture of things you learn will work (fits), things you learn won’t work (misfits), and, if possible, things where you have no idea what will happen (surprises).

For my shower analogy, let’s use a concrete example: the shower in my second bathroom, which both my mom and aunt have had to use (and, rightfully, complained about).

A photo of the handles that control the shower in my guest bathroom in my UK apartment.

So, for “actions I can take”: rotate left handle, rotate right handle, or pull the lever on the left handle.  And for “results I want to control”: the water temperature and the amount of water coming out of the shower head.

Then, I start moving handles and levers individually.  Every time I move a handle and don’t get the outcome I want, it’s a mistake.  But I’m doing it intentionally, so that I can learn what all the levers do.

Many of these attempts will be misfits, producing no shower at all or cold water or whatever.  Some may accidentally be fits.  Hopefully, none will produce surprises (though I have had brown water and sludge come out of faucets before).
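To make this concrete, here is a minimal Python sketch of what a trial log for the shower example might look like.  The actions, outcomes, and the classification rule are all illustrative stand-ins, not a prescription:

```python
# A minimal trial log for the shower example.  Each trial pairs an
# "action I can take" with the observed result, then gets classified
# as a fit, misfit, or surprise.

EXPECTED = {"hot water", "cold water", "no water"}  # outcomes we can anticipate

def classify(desired, observed):
    """Label one trial: fit if it gave what we wanted, misfit if not,
    surprise if we could not even have anticipated the outcome."""
    if observed == desired:
        return "fit"
    if observed in EXPECTED:
        return "misfit"
    return "surprise"

trials = [
    ("rotate left handle fully", "cold water"),
    ("rotate right handle fully", "hot water"),
    ("pull lever on left handle", "no water"),
    ("rotate both handles halfway", "brown sludge"),  # it happens...
]

for action, observed in trials:
    print(f"{action:30s} -> {observed:12s} [{classify('hot water', observed)}]")
```

The point of writing it down, on paper or in code, is the same: every trial gets recorded, and the misfits are kept, not discarded.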

I think this visceral experience is what allows your mind to stop rationalizing why standard approaches and methods should work and get on with seriously seeking out new and novel alternatives that actually work.

And these new and novel alternatives, with their associated insights, are the soul of scientific discovery.

So you want to move into this open-minded, curious, active participant and observer state as quickly as possible, and trying fits and misfits will help you do that.

 

Dimension #3 — On putting the right kind of error into your trial and error.

 

Goal:

Make both extreme and incremental mistakes.

You know the actions you can take.  But you need to figure out why certain actions lead to certain results.

One great way to do this is to try the extreme of each action.

If it’s safe (or you have a reasonable expectation of safety) then pull the lever to the max, rotate the faucet handle all the way, cut out almost everything you thought was necessary, and see what happens.

In physics, this goes by the name “easy cases”.  What we really mean is: use extreme values, like zero, negative infinity, or positive infinity.  Plug them into your model and see what happens.  Does it break things?  Does it give wonky answers?  Does it lead to a scenario where the role of one term in the equation becomes clearer?

That’s the beauty of extreme tests when you’re doing trial and error.  They let you crank up the volume on factors so that you can pinpoint what they might do, how they might operate in your context.
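Here is what “easy cases” looks like in practice, as a small Python sketch using the sympy library.  The model is a made-up stand-in, not any particular physics equation:

```python
# "Easy cases": push a toy model to its extremes and see what survives.
# The model is a made-up stand-in: y = a*x / (1 + b*x).
import sympy as sp

x, a, b = sp.symbols("x a b", positive=True)
model = a * x / (1 + b * x)

print(sp.limit(model, x, 0))      # -> 0: no input, no output
print(sp.limit(model, x, sp.oo))  # -> a/b: the model saturates at large x
print(model.subs(b, 0))           # -> a*x: with b "off", a pure linear response
```

Each extreme makes the role of one term clearer: a sets the initial response, b controls the saturation.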

So what about making “incremental” mistakes?  Just nudging things a little this way and a little that way to see what happens?

These are absolutely necessary too, and tend to happen later on in your trial and error process.  They are a great way to confirm and refine your understanding.

If you want to boil it down, making mistakes at the extreme ends of the action cycle hones your “this-does-that” knowledge, while making mistakes in small incremental steps helps clarify “how” knowledge.

So it’s often best to go after extreme cases in the early trials and then move toward incremental cases later on.  For example, with the shower handles, early on you’ll probably try rotating one handle all the way to the right or left to figure out which direction brings hot water.  Later on, you’ll turn the handle a little bit at a time, until you get the right temperature.
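Here is a small Python sketch of that extreme-then-incremental pattern.  The water_temp function is hypothetical, a stand-in for the real shower:

```python
# Extreme first, incremental later, with a hypothetical water_temp(angle)
# function (angle 0 = handle far left, 1 = handle far right).

def water_temp(angle):
    # Stand-in for the real shower: hotter as the handle turns right.
    return 5 + 55 * angle  # degrees C

# Early trials: the extremes, to learn "this-does-that" knowledge.
print(water_temp(0.0), water_temp(1.0))  # 5.0 (freezing), 60.0 (scalding)

# Later trials: incremental nudges toward a target, to learn "how".
lo, hi, target = 0.0, 1.0, 38.0
for _ in range(10):
    mid = (lo + hi) / 2
    if water_temp(mid) < target:
        lo = mid  # too cold: turn further toward hot
    else:
        hi = mid  # too hot: back off a little
print(round((lo + hi) / 2, 3))  # handle angle giving roughly 38 C
```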

 

Dimension #4 — On putting the right amount of error into your trial and error.

 

Goal:

Make mistakes until you can link all major actions with outcomes.

 

This one is easy enough to grasp.  To put it more bluntly: how many times should you mess up on purpose?

The goal statement says it all: make enough mistakes that you can link all major actions with outcomes in your mind, and you know why they are linked the way they are.

Just imagine you were told that every move you made to set an unfamiliar shower had to move toward the right outcome (no errors allowed).  How the heck would you succeed?  You would have to look up a manual, or find someone who had used the shower before.  It would probably slow the process down to a painstaking pace.  It would stress you out.  And it would require pre-existing insight into how to do it right.

But in discovery, you won’t have that kind of prior insight.  No one does.  So you have to be willing to get things wrong in order to start to generate that insight.

So keep getting it wrong in your trials until you really get why it doesn’t work.  Don’t avoid those misfit moments.  You should be able to make a table or a mind map of links between actions and outcomes.  If you can’t, keep making errors until you can.
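If you like, you can literally build that table and use it as your stopping rule.  Here is a minimal Python sketch; the actions and outcomes are placeholders:

```python
# Keep making errors until every major action has an outcome linked to it.

major_actions = ["rotate left handle", "rotate right handle", "pull lever"]

# Filled in as trials happen; None means "not yet linked".
links = {
    "rotate left handle": "controls temperature",
    "rotate right handle": "controls flow rate",
    "pull lever": None,
}

unlinked = [a for a in major_actions if links.get(a) is None]
if unlinked:
    print("Keep making errors; still unlinked:", unlinked)
else:
    print("Every major action is linked to an outcome; trials can stop.")
```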

 

The Four Trial and Error Dimensions in a Real Physics Research Example

 

I promised I would connect the ideas I’ve talked about to a science example, so let me do that:

For my Ph.D. neutrino physics work, at one point I had to write a piece of computer code that could reproduce the final plot and numbers in an already published paper by the MINOS neutrino oscillation experiment, to make sure our code modeled the experiment well.  First, I wrote some code (to estimate the total number of neutrino particles we predicted this experiment to see at certain energies) based on how my research group had always done it.  Then I wrote down in my research notebook how the existing code had previously been tweaked to produce a good match.  One value had been hand-set, by trial and error, to fit.

In the newer data published at the time, we knew this tweak no longer worked.  But at first I just tried it anyway (try misfits).  Then I started changing the values in the code (make incremental changes).  And we added a few new parameters that we could adjust and I altered those values (try unknowns).  I kept detailed hand lists of the results of my changes on the final output numbers (link actions to outcomes).
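For flavor, here is a much-simplified Python sketch of what that kind of tuning loop looks like.  To be clear, this is not the actual MINOS or group code; the predicted counts, the published numbers, and the single tweak parameter are all invented stand-ins:

```python
# A much-simplified sketch of the tuning loop described above.
# predicted_counts() and the published numbers are hypothetical stand-ins.

published = [110.0, 250.0, 180.0, 90.0]  # made-up event counts per energy bin

def predicted_counts(tweak):
    # Stand-in for the group's prediction code, with one hand-set value.
    base = [100.0, 240.0, 190.0, 85.0]
    return [b * tweak for b in base]

def mismatch(pred, data):
    # Simple chi-square-like score: smaller means a better fit.
    return sum((p - d) ** 2 / d for p, d in zip(pred, data))

# Keep a "hand list" of trials: tweak value -> mismatch score.
log = {t / 100: mismatch(predicted_counts(t / 100), published)
       for t in range(80, 121, 5)}
for tweak, score in sorted(log.items(), key=lambda kv: kv[1])[:3]:
    print(f"tweak={tweak:.2f}  mismatch={score:.2f}")
```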

Then I synthesized these behaviors into new groupings: did it make the results too big, too small, by a little, by a lot?  Did it skew all the results or just the results at certain energies?  Was it a consistent overall effect, or some weird pattern effect?

At this point I kept many code versions to be able to have a record of the progression of my trials (fancy versioning software isn’t commonly used in small physics groups).

A screenshot showing some of the folders and files from my Ph.D. computer codes that required trial and error.

And I did handwritten notes where I worked through why certain outcomes weren’t produced and others were (try until you get insight).

 

Then I did it again.  And again.  And we did it for 10 more experiments totaling…well, a LOT of code.

In the end we got a good match and we were able to use it to complete my Ph.D. work, which explored the impact of a mathematical symmetry on our current picture of the neutrino particle.

So, trial and error, being able to willfully make mistakes to gain insight, can be incredibly powerful and remains a uniquely human skill.

As a 2011 study from Nature suggested, non-expert video gamers (i.e., many with no education in the topic beyond high school level biology) out-predicted a world-leading machine algorithm, designed by expert academic biochemists and computer scientists, in coming up with correct 3-D protein shapes, because they made mistakes on purpose while generating intermediate trial solutions.

Many algorithms, by design, are constrained to do only one thing: get a better answer than they had before.  Every step must be forward; even temporary small failures are not allowed.

But we’re messy humans.

We can take two steps back for every one step forward, or even cartwheel off to the side when the rules say only walking is allowed.  Our ability to strategically move in “the wrong direction” (briefly taking us farther away from a goal) in order to open up options that in the long-run will move us in “the right direction” (nearer the goal) is part of our human charm and innate discovery capacity.  But that requires we acknowledge up front that in pursuit of discovery many trials will be needed, and many of them will not succeed.
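You can see the difference in a toy Python sketch.  This is not the Foldit or protein-folding algorithm, just a caricature: a purely greedy search that only accepts forward steps, next to a search allowed to take occasional steps backward (a simulated-annealing-style acceptance rule):

```python
# A toy contrast, not any real protein-folding code: greedy search vs.
# a search permitted deliberate steps "backward".
import math, random

def score(x):
    # A bumpy landscape (lower is better): a decent local minimum near
    # x = +2 and the true best answer near x = -2.
    return (x ** 2 - 4) ** 2 + x

def greedy(x, steps=3000):
    for _ in range(steps):
        nxt = x + random.uniform(-0.2, 0.2)
        if score(nxt) < score(x):  # forward steps only, no setbacks
            x = nxt
    return x

def with_setbacks(x, steps=3000, temp=3.0):
    for _ in range(steps):
        nxt = x + random.uniform(-0.2, 0.2)
        delta = score(nxt) - score(x)
        # Sometimes accept a worse position: a deliberate step backward.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = nxt
        temp *= 0.999  # take fewer backward steps as the search matures
    return x

print("greedy ends near x =", round(greedy(2.0), 2))  # stuck near +2
print("with setbacks ends near x =", round(with_setbacks(2.0), 2))  # typically escapes toward -2
```

The greedy searcher settles into the first decent valley it finds; the one permitted setbacks can climb out and usually reaches the better answer.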

 

Mantra of the Week

 

Here is this week’s one-liner; what I memorize to use as a mantra when I start to get off-track during a task that’s supposed to help me innovate, invent, and discover:

Misfits matter.

Using trial and error in a conscious, structured way can move us from having thoughts on something to experiences in something.  Notice how “thoughts on” speaks to the surface, like a tiny boat on a broad ocean, while “experiences in” speaks to the depths, like a diver in deep water.  So try.  And err.  Welcome error by remembering that misfits matter and that a deep perspective is where radical insight awaits.  In taking two steps back for every one step forward, those two steps back aren’t setbacks, they’re perspective.

 

Final Thoughts

 

So let’s recap the ideas and examples I’ve talked about in this post:

  • I shared the four dimensions that help define strategic trial and error: putting in the right kind and number of trials, and putting in the right kind and amount of error.
  • I shared an example of how trial and error has been used in my own physics work and in biology to get useful insights.

Have your own recipe or experiences related to trial and error?  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Web Article – “Insight”, Wikipedia entry, https://en.m.wikipedia.org/wiki/Insight.
  2. Web article – Ed Yong, “Foldit – tapping the wisdom of computer gamers to solve tough scientific puzzles” Discover magazine website, Not Exactly Rocket Science Blog, August 4, 2010, http://blogs.discovermagazine.com/notrocketscience/2010/08/04/foldit-tapping-the-wisdom-of-computer-gamers-to-solve-tough-scientific-puzzles/#.XKPkLaZ7kWo.
  3. Website – MINOS neutrino oscillation experiment, http://www-numi.fnal.gov/.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Putting the Error in Trial and Error”, The Insightful Scientist Blog, March 22, 2019, https://insightfulscientist.com/blog/2019/misfits-matter.

 

[Page Feature Photo: An ornate faucet at the Hotel Royal in Aarhus, Denmark.  Photo by Kirsten Marie Ebbesen on Unsplash.]

Awaken Sleeping Giants

Awaken Sleeping Giants

Tell me if this sounds familiar to you:

You have a lightbulb moment.

A great idea you’ve never seen or heard before.  It seems like it could really move things in an amazing new direction.  You’re excited.  No, SUPER excited.  You deluge your friends and family with all the amazing, awesome outcomes your idea could have.

Once that first flush of excitement passes and the adrenaline from having had a genius moment settles, you maybe start to look around for useful info on parts of your idea outside your knowledge base.  And that’s when it happens.  You come across a paper, a talk, a website, a colleague in conversation, where they discuss something painfully close to your supposedly “novel” idea.

The idea’s already been done.

To rub salt in the wound, as you dig more you find out that “the idea”, what you thought was “your idea”, was tested out by some genius years ago.  And they’ve already written about it, or tried it out, and moved on.

*sound of your ego and hope deflating here*

In my case, the “offending paper” was written before I was even born.  That’s four decades old!  I never even stood a chance of getting the first idea on that table.

So what does any of this have to do with old papers that have low citation rates?  In other words, ideas that have been out there for a while, but nobody seems to care or talk about?

 

Deciding if the Old Paper in Your Reading Pile Should Still Be There

 

Well, as a matter of fact, the paper in my example was exactly that kind of paper—it had vanished into history like an unliked and unshared tweet or Facebook post.

But if you read my Research Spotlight summary (link at the end of this post) on a Nature paper about “Team Size and ‘Disruptive’ Science” you would have learned that researchers recently discovered a link between teams that publish more “disruptive” scientific papers, patents, or computer code and the research papers they cite:  Teams proposing new ideas more often cited old unpopular papers.  By unpopular I mean those old papers weren’t cited very often, ever.

It turns out that the paper that proposed the same idea I had was an old paper (well, older than I am) and nobody seemed to cite it.  I had a good handle on just how unpopular it was because it was written by a European physicist in my own exact research field, it was published in a respectable journal, the physicist gave talks about it…  And yet I’d literally never heard of him, his work, or his contribution to this idea.

Before I read the Nature paper I mentioned before on teams and disruptive science, I assumed that this paper I found, and its lack of fanfare, were a bad omen:  “That means his/my idea must be a bad one.”  I had a little pity party for myself and then I tucked the PDF and my notes into a file on my laptop only to review on rare and sentimental occasions.

But in light of reading the Nature paper, I’ve completely re-evaluated my attitude and thoughts toward both the idea and my predecessor’s paper.  Instead of setting it aside, I need to re-evaluate what a low citation count means in this case.

And as I thought about it more and included my own experiences in publishing papers, I realized that low citation rates could have at least three meanings for a paper.  I nickname these “the niche”, “the bad”, and “the visionary.”

 

The Niche Paper

 

For niche papers the low citation rate reflects the fact that no one really cares about the paper’s content.

 

There might be a few reasons for this.  One reason reflects the content itself.  It could just be an overly specific topic (like the singing habits of mice…don’t look shocked, mice do actually “sing”), or a topic that’s nearly impossible to research because the tools and situations don’t exist yet (like extra-dimensional theories of how neutrino particles get their mass).

The other reason reflects a failure of communication.  Maybe the authors used completely different technical jargon or math notation than anybody else has in published work.  So even if we try hard, the rest of us just might not know what the heck they’re talking about.

But there’s a third possible reason suggested by reading a paper in Technological Forecasting & Social Change, which is the focus of this week’s Research Spotlight summary (link at the end of this post).  Maybe it’s an emerging field, working right at the edge of known knowledge.  As a result, it’s living in a sweet, but difficult, spot: at discovery’s edge.  At this point in history, it falls into a niche because both of the two above reasons will trip up the paper: (1) no one will care about it because it’s not “a thing” or “trending” yet; and (2) no one will understand what it’s talking about because the focus of study is so new or under-researched that many ideas, concepts and words will have to be invented to talk about it.

And by the way, don’t assume that “emerging” just applies to stuff in the last 5 years.  Sometimes emerging science takes decades to incubate, with just a few researchers keeping the embers alive, before it really takes off and becomes a new field of study in its own right.

Of course only the first kind of niche paper (the too specific) and the third kind (the emerging field) are potentially useful for breakthrough science, innovations, or inventions.  The second kind (the Greek-speak) just needs a good re-write.

 

The Bad Paper

 

For bad papers the low citation rate reflects the fact that the work it describes just wasn’t that good.

 

There are lots and lots of reasons, big and small, why a paper might be bad.  You could write volumes about this topic and, unfortunately, find lots of real examples to illustrate what you mean.  In fact, right now I bet you can picture an example you thought was junk work and that you still wonder to yourself, “How did that get (published/funded/awarded/bought/greenlit)?”

I have no desire to make this post a laundry list of complaints against certain papers I’ve seen (I have no patience with pessimism or destructive criticism).  The point here at The Insightful Scientist is to make progress toward scientific discovery and insight by finding fresh, valuable ways to move forward.  Not wallow and howl at the bad stuff people sometimes produce.

So let me stick to what you need to do here: recognize when a paper is “bad” so you can move on from it quickly.

Right now, I’ll just point out two big red flags that signal you should avoid using a paper at all, even to inform your own thinking, let alone to cite it in one of your own writings.

First, if a paper uses inconsistent logic to either (1) justify its own findings or (2) compare itself to the works of others then you should consider it a “bad” paper and avoid it.  You don’t want that bad mental habit to rub off on you or to have your credibility tainted by association (you’ll need that credibility later on when you want to encourage a broader community to engage with your ideas).

Second, if a paper does not give sufficient information to evaluate its methods or conclusions then you should consider it a “bad paper” and leave it out of your information pile.  Again, it’s a bad habit, not laying out fully and clearly in writing what makes your work tick.  So do yourself a favor and find a better paper.  [The exception here is in sharing information about a patent or potentially patentable invention, where sharing too much detail could lead to problems in market competition.  But the answer is simple: if you publish you have an obligation to share.  The purpose of making something public by writing about it is to expand the public knowledge domain.  If you don’t want to share, don’t publish.]

What I like about using these two red flags to spot and set aside bad papers that have wandered into your information orbit is that you can check for them even if the paper is well outside your area of expertise.

And if a radical breakthrough is your goal, you should be reading outside your expertise.

I’ve been reading in sociology, biochemistry, and library sciences to try and answer a neutrino physics question (those other fields help improve my skill set, which makes me more adept at tackling my own field).  Research suggests that this kind of intentional, broad information gathering can trigger radical insight.

Do what it takes to get the job done.  Read widely, and filter out bad papers as you find them.

 

The Visionary Paper

 

For visionary papers the low citation rate reflects the fact that the ideas presented are too far ahead of their time for others to recognize or act on yet.

 

I know, I know.  All you futurists, innovators, scientists, inventors, and entrepreneurs out there (myself included) are drooling over this category.

Visionary.

The word just smells of greatness, and we all want to make a contribution that will make it into this category.  So it’s only natural to get a little over-excited and want to label a paper related to your own “big dream” science or innovation as “visionary”.  It gives us a feel-good moment and a sense of fate, an image of what our own future might look like.

But if you remember my story from the beginning of this post, that kind of warm-and-fuzzy meets adrenaline-pumping moment is what got us into this awkward mess, sorting papers into categories, in the first place.  So here we are trying to be mature about this low citation paper and figure out what it means that someone else already came up with it, but no one paid attention.

On The Insightful Scientist I have made it my mission to learn how to be a pro at scientific discovery and share that with others.  So let’s get objective.  How can we tell if the ideas are ahead of their time?

I’ll assume that the paper has avoided any of the red flags that would make it a bad paper to rely on. (If you’re avoiding that evaluation because you’re afraid to see that paper not make the cut, have courage and be decisive.  If the paper is “bad,” it’s bad for your long-term discovery goals.)

As you evaluate the paper, remember that you’re at an advantage because you’re a “future human” 5, 10, 20, 40, even 100 years after the paper was written.  You know how some aspects of the “story” (i.e., the science) actually turned out and you can use that to help you evaluate.

Did this old paper have the right mindset—is it logically consistent, does it emphasize objectivity and evidence, and does it share information willingly?  Did other ideas presented in the paper turn out to be true or stand the test of time?  Did the paper get those ideas right, even though they were based on some false assumptions?  Are those false assumptions of the “past humans” who wrote the paper mostly a result of not having access to the data, technology, populations, or even big pots of money like we future humans have now?

What you’re really trying to figure out is if the authors had good research instincts (due to experience, mindset, or both), even in the face of limited resources.  If they did, then it’s possible they had honed their visionary skills about the topic and you might be looking at a visionary paper.  It may have provided a past blueprint for a good idea that the future can now act on.  If you want some examples of papers in this category, check out the link toward the end of this post.

And if your final decision is that the low citation paper you’ve got is visionary…build on it!

 

Learning to Sort Papers Like a Pro

 

If you remember, at the beginning of this post, I said this whole stream of thought came about because I had a low citation paper sitting in a neglected folder.  I’d originally, purely based on citation rate, dismissed it as “bad”.  But upon re-evaluating it I’ve decided it sits somewhere between niche and visionary.  I’m still working out which category I think it fits in best.

But the important point is that I’ve re-engaged with the paper and I’m wrestling with the science, ideas, and methods it presents in a much more thoughtful way.  I’m not falling in love with it (like a novice might) and I’m not dismissing it out of hand either (like an old-hand might).  I’m handling it like a pro who knows that when it comes to pursuing scientific discovery with deliberate skill, learning to distinguish between the niche, the bad, and the visionary is part of your job description.

 

Mantra of the Week

 

On a final note, before I sum this post up in a short bullet list, let me say this:

If you’ve read some of my past posts from 2018, especially the old versions, then you know I sometimes like to end with an artsy, one sentence tagline, and I use the post feature photo to illustrate it.

These one-liners are what I memorize to use as mantras when I start to get off-track during a task that’s supposed to help me innovate, invent, or discover.

This week’s one-liner is:

Awaken sleeping giants.

If you want to change the knowledge landscape then sometimes you have to dig into the past to find ideas that are sleeping giants.  Once awakened, the rumble and weight of their presence will cause heaven and earth to stand up and take notice.  And as physicist Isaac Newton once supposedly said, “If I have seen further, it is by standing on the shoulders of giants.”

 

Final Thoughts

 

So let’s recap the ideas and examples I’ve talked about in this post:

  • I suggested a way to sort old unpopular papers in your information pile into three categories: the niche, the bad, and the visionary.
  • I pointed out why you should throw out papers falling into the bad category and consider building on papers in the niche and visionary categories.
  • I talked about how each of these categories of papers fit into the big picture of the pursuit of scientific discovery.

Do you have your own sorting and sifting criteria for papers?  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Web Article – Carl Zimmer, “These Mice Sing to One Another — Politely,” The New York Times, February 28, 2019, https://www.nytimes.com/2019/02/28/science/mice-singing-language-brain.html.
  2. Web Article – “Like Sleeping Beauty, some research lies dormant for decades, study finds”, Phys.org website, May 25, 2015, https://phys.org/news/2015-05-beauty-lies-dormant-decades.html.

 

Related Content on The Insightful Scientist

 

Blog Posts:

 

Research Spotlight Summaries:

 

How-To Articles:

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Low Citation Papers: The Niche, the Bad, and the Visionary”, The Insightful Scientist Blog, March 15, 2019, https://insightfulscientist.com/blog/2019/awaken-sleeping-giants.

 

[Page Feature Photo: Standing figure and reclining Buddha at the Gal Vihara site in Sri Lanka.  Photo by Eddy Billard on Unsplash.]

Three Keys

Three Keys

It’s that time of year when you suddenly realize that any goals, plans, or New Year’s Resolutions you have for 2019 already seem like a bad idea.  I certainly have.

I’ve been sick, I’m in the midst of a significant professional transition, and I still can’t even find the notebook where in December 2018 I wrote down all the wonderful things I hoped to make happen in 2019.  Off the top of my head I only remember two goals: (1) eat more nutritious food daily and (2) practice my scientific discovery skills daily.  The food goal has been easier to do; in fact, at least I’ve started on it!  But the discovery goal has seen me spend only two dedicated hours practicing my ability to invent new theoretical equations since 2019 started.

Why are some goals and habits so much easier to follow through on?  In particular for The Insightful Scientist why are some habits, like making progress on a big discovery goal, so hard to practice?  I think that some of the same basic things that hold us back from finishing fitness, diet, hobby, money or other personal goals also plague our ability to act on discovery goals.  So let’s talk about ways to fix our discovery habits and make 2019 a better discovery year.

 

3 Keys to Good Discovery Resolutions

 

You’ve probably heard these suggestions before, but let me remind you of three things that you should have in place to increase the chances that you’ll follow through on a goal.

In this case, I’m going to use my own attempts over the last three months to come up with a 30-day discovery skills mini-workout for myself as an example.  I always remind myself of best practices to develop new habits (and make room for them in my mind and life) by reading a post from Leo Babauta’s wonderful Zen Habits site (I’ve been a fan since 2010).  I’ll condense lots of Leo’s advice into a list of three keys for successful goals and habits:

 

  • Have a well-defined goal, so you know when you’ve succeeded.
  • Have a clear picture of concrete actions to take to achieve the goal, so you’ll act.
  • Have a way to monitor your progress towards your goal, so you’ll adapt and stick with it.

 

Let me break this general advice down and translate it into my specific example to give you one idea of how you might make these keys work for you and your scientific discovery goals.

 

1. Define Your Goal

 

Original Poorly Defined Goal:

Spend 30 days of daily practice focused on improving my scientific discovery skill set.

That was the original goal statement I had in mind in January.  Is it “well-defined”?  No.

You can tell if your goal is well-defined by how long it takes you to try your first day on a program, once you’ve fully committed to starting it as soon as possible.  It took me 19 days (I use a Bullet Journal to loosely track events, so that’s how I know) before I sat down and tried it.  That’s a nineteen-day delay after I said, “I’ll absolutely, whole-heartedly start this tomorrow”.

As a general guideline, I have since set seven days as the maximum time from firmly committing to start now and actually starting.  If it takes me longer than that, odds are good I don’t have a clear enough goal in mind, so I procrastinate.

For this example, I was looking for a discovery skill set goal, not a discovery project goal.  At the end of this post, I’ll come back and briefly talk about applying this idea to a topic-specific scientific discovery research project.  But for now, we’re talking about skills.

To come up with a better goal and hit the refresh button on starting my program, I did a few things.  I freed up space.  Literally.  I did a Getting Things Done mind sweep and emptied one room in my house of major distractions (pictures, books, papers, decorative objects).  Then I spent time in my de-cluttered space and asked myself the same question over and over again: what exactly do I hope to be able to do at the end of my 30-day discovery skills project that I cannot do right now?

For me, as a theoretical physicist, a key skill is to be able to generate an equation that represents a new physical idea.  In fact, generating equations is a key part of scientific discovery for many scientists.  So, that’s the skill I wanted to focus on first.  After two weeks of concentrated thinking, I came up with a working solution:  A daily practice I call “creative math”.  I can hear mathematicians groaning already, but physicists are notoriously more irreverent toward math—we’ll happily build the Lego equivalent of a Bugatti so long as it mostly gets the job done.

So, let me re-define my goal now using creative math.

 

Final Well-Defined Goal:

Over a 30-day period engage in at least 30 creative math sessions total, lasting no less than 20 minutes and no more than 1 hour each, with a minimum of 1 practice session a day, excluding Sundays.

 

Now we’re clearer.  I could make a simple tracker (maybe something Bullet Journal style, using a phone app like Loop Habit Tracker or Habit Bull, or even a simple mark on a calendar) and just check off each session as I complete it.  And I can (and have) put two timers on my phone, one labelled “CMath 20” and the other “CMath 60”, to keep me on track during sessions (kitchen timers, Time Timers, and Pomodoro apps have also worked well for me).
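If you prefer code to paper, the same goal is easy to check automatically.  Here is a minimal Python sketch; the session log, dates, and minutes are made up for illustration:

```python
# A sketch of a session tracker for the goal above: at least 30 sessions
# in 30 days, 20-60 minutes each, at least one per day except Sundays.
from datetime import date, timedelta

# Hypothetical log: (date, minutes of session time).
log = [(date(2019, 3, 4), 25), (date(2019, 3, 5), 60), (date(2019, 3, 6), 15)]

valid_days = {d for d, minutes in log if 20 <= minutes <= 60}
print(f"sessions that count so far: {len(valid_days)} of 30")

start = date(2019, 3, 4)
for offset in range(3):  # scan the days elapsed so far
    day = start + timedelta(days=offset)
    if day.weekday() != 6 and day not in valid_days:  # weekday 6 = Sunday
        print("missed (or out of range):", day)
```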

That’s one key in place to jump start my discovery skills program.  Two to go.  So what do I mean by “creative math” anyway?

 

2.  Define Concrete Actions

 

Original Poorly Planned Actions:

Spend at least 1 hour of butt-in-chair time practicing a discovery skill.

First, I should explain my quirky phrase “butt-in-chair time”.

I use this to specify what I mean by having actually tried on intellectual tasks.  For fitness goals, defining “try” and “effort” is easier: do so many reps, walk or run so far, lift a certain amount of weight, etc.  But how do we define a good level of try for intellectual tasks?  I define butt-in-chair time as the hours or minutes spent actively hand writing or typing up material directly relevant to producing the task outcome.  If you do more work standing up (whiteboard, or machine shop bench anyone?) then you might think of another phrase (Hand-on-board time? Powered-up-tool time?).

These sessions don’t have to be continuous, but the minutes have to add up to the target total.  If I just sit there thinking, having an internal conversation, checking email, WhatsApp or whatever, that time doesn’t count.  But if I’m writing in a notebook at a coffee shop (like right now), at my desk, on the tram, seated at a bus stop, on an airplane…you get the idea.  All of that time counts.  The total estimate won’t be perfect, but it does make me more honest about “Did I actually try?  Or did I just pretend to try?”

From the newly defined goal, I’ve set the activity for this butt-in-chair time as “creative math.”  The goal of the session is to generate an equation that represents a physical situation.  Over the course of the 30 days I should be able to see improvement (or lack thereof) in my ability to invent these equations.

First, I needed to devise a layout for this creative math.  I knew the final session “outputs” (the physical artifacts demonstrating that I had actually completed a session) needed to be pen-and-paper pages (these are easiest to buy everywhere and use everywhere; so no excuses for not completing a session).

Also, not doing it in a digital format (i.e., in an app or using software) helped with another aspect: I wanted to practice using internal conceptual resources, not pulling from external sources.  So, no textbooks, guides, internet searches, or even my own research notes, allowed during the session.  If I could learn to be competent without those tools, I could become more masterful with those tools.

This just left the overall format for my pen-and-paper pages.  At the time I was learning about Mike Rohde’s “sketchnotes” system as part of my on-going research.  So, I adapted his sketchnote task to my idea.  A sketchnote is traditionally a one-page sheet of handwritten and drawn notes, taken down during a talk or lecture, and designed to capture just the essential points, using descriptive doodles and hand drawn fonts.

I adapted this to come up with a creative math template using a one-page style with a central box, which emphasizes that I am looking for an equation, and a doodle and fancy typeface statement outlining the physical situation I want to describe with my equation.  Then I spend 20 minutes to 1 hour filling up the front side of the sheet of paper with keywords, questions, and phrases affecting the physical situation, which I then immediately put into a math form.  Toward the end of my session I combine all the math forms I’ve got into one final equation which counts as my “answer”.

My only other rule is that I avoid using common notation for any of my math.  I do that to avoid (1) biasing myself toward what I think the final answer “should” look like (this also slows me down and makes me more mindful of what I’m doing) and (2) cheating by using equations I already know from memory.

 

My first attempt at creative math. [Photo by B. K. Cogswell.]
You might wonder why I go to the effort of avoiding using things I’ve already learned when working a creative math practice session.  The reason will become clear when I discuss the third key to developing a solid scientific discovery skills practice program in the next section.

Before I close out this section, let me pull it all together and write down a new and improved concrete actions statement:

 

Final Well-Planned Actions:

Spend at least 20 minutes a day minimum, 1 hour maximum, of focused butt-in-chair time producing one page of creative math, at least six days a week.  A completed creative math page includes a statement of the specific physical situation being modeled, a doodle of that situation, and a final guess at one equation that describes an aspect of the situation.

 

Do you see how I keep moving from a generic desire to a specific intent of when and how to act and what specifically to do?  That mental transition is what you’re after before you start your own scientific discovery program.

Now we just need one more piece to have a solid plan we can start and finish:  we need some way to monitor our progress.

 

3.  Monitor Your Progress

 

Original Poor Tracking Idea:

Make daily practice pen-and-paper handwritten sheets and put them in a binder to get a portfolio of practice pieces.

 

Following on the sketchnoting theme and sticking to pen-and-paper, I initially planned to monitor my progress in a very visual and physically tangible way: I was going to make a pile of “stuff”.  The bigger the pile, the more practice I had under my belt!  That pile was going to be handwritten pages representing multiple attempts.  Like art students who have hundreds of practice sketches tucked into a portfolio, I would have creative math pages tucked into a binder.

This was a pretty solid first thought, but it did not get at the heart of my discovery practice goal.  Monitoring pages evaluates my level of consistency and the accumulation of practice hours.  Good information, but not the most important thing.

The most important thing to monitor is: are the invented equations I’m coming up with getting better over time?

To answer this, I had to come up with a simple but more sophisticated way to think about the equations I was creating.

First, I broke the equations into two elements: ingredients and connections.  Ingredients are math variables like mass, density, temperature, etc.  Connections are math operations like subtraction, powers, derivatives, etc.  I then developed a new template to go on the back side of a creative math page.  On it I list and count the number of ingredients and connections in my answer.  Then I look up the actual answer (in papers, sites, textbooks, etc.) and list and count its ingredients and connections.  Then I check to see how much I got right!  The goal is to get all ingredients listed with connections in the proper order.  Only pieces I get right count in my percent correct.
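In code terms, the back-of-the-page check boils down to a set comparison.  The ingredient and connection lists below are hypothetical examples (loosely modeled on a drag-force equation), not one of my actual practice pages:

```python
# Score an invented equation against the looked-up answer by counting
# the pieces (ingredients and connections) that match.

def percent_correct(mine, actual):
    """Share of the actual answer's pieces that my invented equation got."""
    mine, actual = set(mine), set(actual)
    return 100 * len(mine & actual) / len(actual)

my_ingredients = {"density", "speed", "area"}
actual_ingredients = {"density", "speed", "area", "drag coefficient"}

my_connections = {"multiply", "square the speed"}
actual_connections = {"multiply", "square the speed", "halve"}

print(percent_correct(my_ingredients, actual_ingredients))  # 75.0
print(percent_correct(my_connections, actual_connections))  # ~66.7
```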

 

My second attempt at creative math and my improved way to monitor my progress. [Photos by B. K. Cogswell.]

This brings me back to why I use unusual notation.  I did two practice sessions in January, full of enthusiasm, and in my Bullet Journal started a collection called “Creative Math Ideas”, so I would have a stockpile of physical situations to use each day during practice.

 

My “Creative Math Ideas” bullet journal collection. [Photo by B. K. Cogswell.]

It turns out my enthusiasm was a case of running before I could walk.  They were all good questions, but many of the topics I initially picked did not have easy-to-find answers (the papers were too niche, or science didn’t have a clear answer yet).

To get around this I realized I needed to start out with simpler examples where I had already seen an answer, so I knew one existed.  But I didn’t want to cheat and use memory.  After all, at some point, even the simplest problems were all scientific discoveries.  Two hundred years ago, the vast majority of science known today hadn’t been discovered yet.

So even simple problems are good practice for discovery so long as you actually try to discover them for yourself.

By using non-standard notation and relying on personal experience rather than textbook knowledge, you can treat these problems as creative math candidates.

 

Schaum’s books I am using as a source of simple problems for my creative math practice. [Photo by B. K. Cogswell.]

Final Good Tracking Idea:

Estimate at the end of every creative math session how good my invented equation is, by writing down the percentage of ingredients and connections I got right, checked against a known correct math equation for that physical situation.

 

And that’s the final piece in place for a solid 30-day program to improve one of my scientific discovery skills.  I started it eight days ago and so far so good!

 

Trying the 3 Keys with a Discovery Research Project Instead of a Discovery Skills Project

 

You may like this idea of a discovery skills mini “boot camp” that you could do as a yearly goal or refresher.  But what if you wanted to adapt it to a project rather than a skill set?

I’ve dropped some hints along the way as to how this might change.  The three keys must still be met.  But for the first key your goal would be a physical output rather than an ability improvement.  For the second key you would tailor your butt-in-chair time to whatever output you need: if it’s code, coding sessions; if it’s a physical prototype, building and modifying parts or refining a schematic; and so on.

Unlike in my example, which had different problems each session, in each session you would now work on the same problem with one new variation.  If you’re doing equations then session 1 would have version 1 of that equation, session 2 would have version 2 of that same equation, and so on.  You might come up with variations by emphasizing an aspect that’s a strong physical limitation or by emphasizing a failed aspect of the previous version.

Finally, for the third key you would monitor progress by evaluating at the end of every session how well that version fits the solution criteria you need.  Is it cheap enough?  The right size, shape, or speed?  Does it explain the unexplained part?  Does it create the graph features you want to match?
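As a sketch of what that per-session monitoring could look like in Python (the criteria and version values are invented placeholders):

```python
# Per-session version tracking for a project goal: check each version
# against the solution criteria you need.

criteria = {
    "cheap enough": lambda v: v["cost"] <= 100,
    "right size":   lambda v: v["size_cm"] <= 30,
}

versions = [
    {"name": "v1", "cost": 150, "size_cm": 25},
    {"name": "v2", "cost": 90,  "size_cm": 35},
    {"name": "v3", "cost": 95,  "size_cm": 28},
]

for v in versions:
    passed = [name for name, ok in criteria.items() if ok(v)]
    print(f"{v['name']}: meets {len(passed)}/{len(criteria)} criteria {passed}")
```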

If you spend a little time in the beginning thinking it through, you can come up with a 30-day kickstart to get you putting in meaningful time trying to discover what matters to you.

 

Final Thoughts

 

That’s my creative math practice in a nutshell and how I used three keys of good goal and habit setting to come up with it.  This creative math practice is how I got back on track with my New Year’s Resolution to practice my scientific discovery skills and become a better discoverer daily.  I’m running my 30-day creative math practice program right now and it’s already helped me notice some new avenues to explore in my physics research.

So let’s recap the ideas and examples I’ve talked about in this post:

  • I covered an example of how to define a 30-day practice program to improve your skills at inventing equations describing physical situations off the top of your head.
  • I discussed the three keys to creating a good practice program: (1) define a clear and specific program goal; (2) define concrete steps to take on a regular schedule; and (3) define a way to monitor your progress toward your goal.
  • I pointed out ways you might adapt my example scientific discovery skills program into a discovery research program by using the “butt-in-chair time” idea to produce new ideas on a regular schedule.

If you’ve got your own practice techniques I’d love to hear about them.  Or if you try out the program I’ve shared here I’d like to know how the experience goes.  And if it helps inspire you to a breakthrough let me know!  You can share your thoughts by posting a comment below.

 

Interesting Stuff Related to This Post

 

  1. Blog Post – Leo Babauta, “Set Powerful Deadlines,” April 26, 2016, https://zenhabits.net/deadlines/.
  2. Web Article – Andrew Krok, “Life-size Lego Bugatti actually works, has over 1 million pieces: It gets its power from 2,304 Lego electric motors”, Roadshow Reviews by CNET, August 30, 2018, https://www.cnet.com/roadshow/news/lego-bugatti-chiron-life-size/.
  3. Blog Post – Mike Rohde, “Ideas not Art – Students learn how to use sketchnotes to improve their notetaking in lectures”, December 31, 2018, https://sketchnotearmy.com/blog/2018/12/31/ideas-not-art-students-learn-how-to-use-sketchnotes-to-improve-their-note-taking-in-lectures-1.
  4. YouTube Video – Dr. Ellie Mackin Roberts, “Research #BulletJournaling”, December 7, 2016, http://www.elliemackin.net/blog/category/bullet-journal.

 

How to cite this post in a reference list:

 

Bernadette K. Cogswell, “Three Keys to Creating a Discovery Skills Practice Program”, The Insightful Scientist Blog, March 8, 2019, https://insightfulscientist.com/blog/2019/three-keys.

 

[Page Feature Photo:  Keys in an equipment room in China.  Photo by Chunlea Ju on Unsplash.]

Dancing with Discovery

Dancing with Discovery

Putting things into categories is helpful.  Sometimes it lets you recognize shared commonalities between things that you didn’t notice before.  Other times it gives you a mental shortcut to know how to interact with something—once you know its category, you’re more likely to know what it’s for and what to do with it.

In the first Insight Exchange I recently hosted at my home institution I structured the process of scientific discovery into a five-phase cycle and I also structured the types of scientific discovery into four categories.  The purpose of this “typology” of scientific discovery was to help guide the conversation in the group.  I also had two hunches: (1) that scientists pursuing similar types of discoveries, even if they are from different fields, will share similar challenges and setbacks; and (2) that each category of scientific discovery has a set of associated strategies uniquely suited to making progress on that kind of discovery.

This idea of discovery categories and associated strategies is a keystone of my goal to build software that helps promote scientific discovery.  As I work on finalizing a first evolution of a territory map of scientific discovery and strategies (to be released under Spark Points sometime later this year) I keep mulling over the questions: What distinguishes the types of scientific discoveries? And what strategies are most useful for what types of scientific discoveries?

As always, I’m looking for ways to answer these questions that work across a broad range of fields, not just physics.  For the Insight Exchange I used a four-category breakdown of types of scientific discoveries: object, attribute, mechanism, technique.  Each of these are labelled by the primary type of knowledge being sought, as described in the little list below.

 

CATEGORIES OF DISCOVERY

  • OBJECT
    • new object
  • ATTRIBUTE
    • new property of a known object
  • MECHANISM
    • new behavior or phenomenon, or explanation of a known behavior or phenomenon
  • TECHNIQUE
    • new tool or method to generate a known object, attribute, or mechanism

 

For example, in my own field of neutrino physics open questions related to each category would be:

 

EXAMPLES OF CATEGORIES OF DISCOVERY IN NEUTRINO PHYSICS

  • OBJECT –
    • do additional neutrinos exist beyond the three known standard model (SM) ones?
  • ATTRIBUTE –
    • does the neutrino have a non-zero magnetic moment?
  • MECHANISM –
    • what is the origin of neutrino mass?
  • TECHNIQUE –
    • how can you develop a detector capable of observing beyond the standard model (BSM) physics using coherent elastic neutrino nucleus scattering (CEvNS)?

 

So, in this classification of scientific discovery, it’s a little like playing a professional version of the childhood question game “Animal, Vegetable, or Mineral?”.

The group in the first Insight Exchange did not seem to take issue with my category labels too much, but many people felt their personal scientific discovery goal did not cleanly fit into one category and listed it as belonging to multiple categories.  So as a workshop strategy, this typology didn’t work out too well (since I had planned to put people into small teams grouped by their category, with the thought that they might share more of the same challenges and, therefore, be better positioned to offer each other feedback).  Best laid plans.  Instead, I ended up assigning teams completely differently (in such a way as to ensure a good diversity of scientific fields and career stages within each team).

So, I’ve gone back to the drawing board a little to keep thinking about this idea of types of scientific discovery.  So far, I’ve been struggling to find the words to yield the right material when doing a literature review search:  Is it “categories” of scientific discovery?  Is it “types”? A “typology of scientific discovery”?  Or maybe “the classification of scientific discoveries”?

My search remains unsuccessful to some degree, but I did find two very short writings that attempt to do the same thing.  The first is an editorial in Science magazine by a former editor of that publication, Daniel E. Koshland Jr., a professor of biochemistry and molecular and cell biology, entitled “The Cha-Cha-Cha Theory of Scientific Discovery”.  Koshland’s theory is that historical patterns of scientific discovery (and non-scientific discoveries) suggest that discovery can be divided into three categories: charge, challenge, and chance.

In Koshland’s theory charge discoveries are about finding a solution to a well-known problem.  In the charge type of discovery, the discoverer’s primary role is to view the same data and context already well-known to all, but to come to some novel conclusion by perceiving that collection of facts in a way no other researcher has.  In the challenge type of discovery, the discoverer’s primary role is to bring cohesion and consistency to a body of well-known facts and/or anomalies that are in tension or lack a unifying conceptual framework.  Lastly, in the chance type of discovery, the discoverer’s role is to perceive and explain the central importance of a known or recently observed fact obtained by accident.  In his editorial, Koshland gives numerous examples of each type of discovery from fields as diverse as chemistry, physics, and biology.

More importantly, he extends his category theory to note an additional pattern:

“…the original contribution of the discoverer can be applied at different points in the solution of a problem.  In the Charge category, originality lies in the devising of a solution, not in the perception of the problem.  In the Challenge category, the originality is in perceiving the anomalies and their importance and devising a new concept that explains them.  In the Chance category, the original contribution is the perception of the importance of the accident and articulating the phenomenon on which it throws light.”

[D. E. Koshland, Science, vol. 317, p. 761 (2007)]

Before I do a little comparison of these different ways to categorize scientific discovery, let me also throw another article into the mix.  Keiichi Noe, a professor of philosophy at Tohoku University, wrote a contribution, entitled “The Structure of Scientific Discovery: From a Philosophical Point of View”, to a book on discovery science.  The actual focus of Noe’s paper is on the mental process by which discovery is achieved and how this might be translated into a computational algorithm.  But to elucidate such strategies, Noe first defines two types of discovery.

For Noe, one type of scientific discovery is “factual discovery”, the “discovery of a new fact guided by an established theory” (p.33).  In contrast, the second type of discovery is a “conceptual discovery”, “which [proposes] systematic explanations of…phenomena by reinterpreting pre-existing facts and laws from a new point of view” (p.33).  In Noe’s framework, the significance of these distinctions is that the scientist must bring a different kind of thought process, in particular a different implementation of the imagination, to the pursuit of each kind of discovery.  For factual discovery what Noe calls a “metonymical imagination” is needed; whereas, for conceptual discovery a “metaphorical imagination” is needed.

In the case of the discovery of new facts, the metonymical imagination refers to a way of thinking in which newly discovered items that are closely related, as seen through the lens of existing theory, are grouped together.  As Noe puts it these “discoveries…complete an unfinished theory” (p. 37).  In contrast, in the case of the discovery of essentially new theory, the metaphorical imagination refers to a way of thinking in which hidden or implied links are created between unrelated items that share common characteristics.  In these discoveries “[a change of] viewpoint from explicit facts to implicit unknown relations [occurs]” (p.37).

If we use the five-phase discovery cycle (question-ideation-articulation-evaluation-verification) as a common grounding point, then these three different typologies of scientific discovery—my quartet, Koshland’s trio, and Noe’s duo—each represent a different way of thinking about the discovery cycle.  For me, the discovery classifications emphasize the type of output desired by the discoverer at the end of the discovery cycle (i.e., after successful verification)—is it an object, a description, an explanation, or a method.  For Koshland the emphasis is on the point at which the discoverer must innovate within the discovery cycle in order to discover something new—either ideation (charge and challenge discoveries) or articulation (chance discoveries).  For Noe, the emphasis is on the overarching viewpoint and mindset that the discoverer applies in moving through the entire cycle—do you use the prevailing view or replace it.

It’s also easy to see that depending on which typology of scientific discovery you use, you will also perceive different strategies and techniques as more useful.  Within my typology, mathematical and logical strategies are better suited to mechanism discoveries, building and prototyping strategies to techniques, and a mix of both to object and attribute discoveries.  For Koshland, strategies that boost ideation or streamline articulation will simultaneously advance discovery.  Noe has explicitly defined useful tactics, the metonymical imagination for factual discovery and metaphorical imagination for conceptual discovery.

Which brings me to the end of my musings for this week.  Some weeks, coming up with an image that sums up my new perspective on scientific discovery is incredibly challenging.  But this week Koshland has made it easy for me:

If scientific discovery is a kind of dance, wherein the dancers become more skilled and graceful with time, producing ever more intricate choreographies of knowledge, then typologies of scientific discovery are merely styles of dance that one can practice.  For me it’s a kind of folksy American square dance or mannered English quadrille, for Koshland a vibrant Cuban cha-cha, and for Noe a delicate French pas de deux from ballet.  But whatever your style for dancing with discovery, knowing the kind of dance you’re in just might help you improve your moves.

Seek An Improbable Partner

Seek An Improbable Partner

In December I plan to post a series of log entries dedicated to the use of analogies.  These posts will be true “log” items, in the sense that they will journal my progress as I try to create a “recipe” for applying analogical thinking in a research physics context, in order to generate insight and foster scientific discovery.

In the meantime, I picked up a copy of psychologist Margaret Boden’s book, The Creative Mind:  Myths and Mechanisms, for this week’s reading, thinking that I would be covering a separate intellectual arena.  But it turns out there was an intriguing section on analogies!  Needless to say, I wasn’t about to set it aside for a month and a half.  So, unofficially, this has become the first log entry in the analogy series.  If you’ve been following the Physicist’s Log and Boden’s name sounds familiar, that’s because I drew on her work in my earlier log entry discussing the definition of scientific discovery, “In the Name of Discovery”.

Boden’s book explores the mechanisms, from a cognitive psychology standpoint, that underpin our ability to think new thoughts.  She draws on a wide range of research, most prominently computational psychology (mimicking processes in the mind using computer algorithms).  The advent of computational psychology and its studies of creativity, problem solving, and discovery is fortuitous, because a recipe for practice and a computational algorithm are much the same.  Both provide (1) the content necessary to obtain a given outcome and (2) a sequence of implementation guiding the use of that content to obtain the outcome.  (I mentioned this attention to both content and action earlier, in the log entry “The Physicist’s Repertoire”.)
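
To make that parallel concrete, here is a minimal sketch in Python, my own illustration rather than anything from Boden: a recipe rendered as an algorithm, with the content and the sequence listed separately.  The ingredients and steps are, of course, invented placeholders.

```python
# A minimal sketch (my own illustration, not from Boden's book) of the
# recipe-algorithm parallel: both name (1) the content required and
# (2) the sequence of steps that turns that content into the outcome.

content = {"flour_g": 500, "water_ml": 350, "salt_g": 10}   # (1) content

steps = [                                                    # (2) sequence
    "mix flour, water, and salt",
    "knead for 10 minutes",
    "rest for 1 hour",
    "bake at 230 C for 30 minutes",
]

def run(content, steps):
    """Carry out the steps, in order, on the given content."""
    print(f"using: {content}")
    for i, step in enumerate(steps, start=1):
        print(f"step {i}: {step}")

run(content, steps)
```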

Now, discovering why analogies play a role in scientific discovery is an attempt at scientific discovery itself.  So it should follow a scientific discovery cycle, which I’ve formulated as the process of question → ideation → articulation → evaluation → verification.  We can trace this flow in Boden’s discussion, as well as in a resource she cites in her book, essayist Arthur Koestler’s The Act of Creation, which also tackles questions about human creativity.

In the spirit of a recipe, or what Boden might prefer to call a “conceptual space…a structured [style] of thought…that is familiar to (and valued by) a certain group” (p. 4), I will use the discovery cycle as an outline to frame my own discussion.  [Physicists respond to the idea of recipes, which is why one of the most heavily referenced books in physics computation is called Numerical Recipes.]

BEGIN DISCOVERY PROCESS…

Question

I’ve already stated the starting question, the frame that highlights the desire to know or do more with something, which ignites a discovery chase:  why do analogies play a role in scientific discovery?  Or, the more hypothesis-friendly version: by what mechanisms does analogy play a role in creativity?  Here we assume that scientific discovery, at the level of an individual thinking up a new idea, represents a sub-type of creativity.  Boden further crafts this overarching question into two sub-questions to frame the main discussion of analogies in her book (pp. 186–198):  How are existing analogies evaluated for relevance?  How are new relevant analogies generated?

Ideation

Ideation is about coming up with answers to the questions posed in the first phase of scientific discovery.  For the question “How are existing analogies evaluated for relevance?”, the overall idea presented by Boden as an answer (not necessarily her idea per se, but more a synthesis of the existing research) is that the mind contains a storehouse of possible analogs against which it can compare the present example, and it determines (or selects) a good match based on a set of criteria to be considered (called constraints).  The exact nature of this set of criteria, and the relative importance given to each criterion in the evaluation process, remains an open question, but three areas are cited: structural match (correspondence between elements or relations), semantic match (similarity of meaning), and “pragmatic centrality” (the likelihood that the match is important to the originator of the analogy).

For the second question, “How are new relevant analogies generated?”, the overall idea elicited by Boden is that the mind contains a base set of knowledge and a set of descriptive identifiers which classify that knowledge.  When given a source item, the mind tries to generate a target item, drawing from its knowledge base and relying on its descriptive identifiers to tell it which features of the source item are the important ones to be re-created in the new target item.
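
Since the evidence Boden draws on comes from computational psychology, it may help to see this selection-by-constraints idea in miniature.  Below is a toy sketch, entirely my own construction and not code from any of the research programs Boden cites: each stored analog gets a score on the three constraint types above, the scores are combined with weights, and the best-scoring analog is selected.  Every name, score, and weight here is invented for illustration, and the weighting itself is, as noted, an open question.

```python
# A toy sketch (my own construction, not code from ACME, ARCS, COPYCAT,
# or SME) of constraint-based evaluation of existing analogies: each
# candidate analog in the "storehouse" gets a weighted score on the
# structural, semantic, and pragmatic constraints, and the best match wins.

# Hypothetical scores in [0, 1] for candidate analogs to a target problem.
candidates = {
    "water flow":     {"structural": 0.9, "semantic": 0.4, "pragmatic": 0.7},
    "crowd movement": {"structural": 0.6, "semantic": 0.5, "pragmatic": 0.3},
    "billiard balls": {"structural": 0.8, "semantic": 0.2, "pragmatic": 0.5},
}

# The relative weighting of the criteria is an open question in the
# research; these weights are purely illustrative.
weights = {"structural": 0.5, "semantic": 0.3, "pragmatic": 0.2}

def score(constraint_scores):
    """Weighted sum over the three constraint types."""
    return sum(weights[c] * s for c, s in constraint_scores.items())

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # -> "water flow" under these illustrative numbers
```

Real analogical mapping programs are far richer than this, but the skeleton of a storehouse, a set of constraints, and a weighted selection is the one Boden describes.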

Articulation

Boden cites studies of computational algorithms designed either to identify the best analog for a target item from among a set of pre-loaded items, or to generate an analog to a target item based on a set of creation rules.  These analogical mapping programs (ACME, ARCS, COPYCAT, SME) represent the articulation of the ideas from the previous phase of the scientific discovery cycle.  They translate an internalized mental conception into externalized physical artifacts, with well-defined content and relations that can be tested.  They are, of course, highly idealized and very simplified, but that’s what makes the clearest science: the ability to tinker with just one feature and see how the world responds, in order to better understand that feature’s role in “how things work.”

Evaluation

In the case of a bit of computer code, evaluating the overall utility of the initial idea is easy enough.  Run the code and interpret the output, i.e., assess the analogy returned and see whether or not it matches what a human being would have provided as the answer.  The more times it does, the more it suggests that the processes coded may represent actual processes in the mind.  Some of the codes cited above do match human outputs, so it seems there is something useful in the ideas about analogy that they encode.
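
The evaluation step itself can be just as plainly mechanical.  Here is a minimal sketch, again my own toy construction with invented placeholder data rather than results from any of the programs named above, of scoring a program’s analogy choices against human answers:

```python
# A minimal sketch of the evaluation step: compare the analogies a
# program returns against those human participants chose, and compute
# an agreement rate. The data here are invented placeholders.

model_picks = ["water flow", "orbit", "water flow", "spring", "orbit"]
human_picks = ["water flow", "orbit", "billiards",  "spring", "orbit"]

matches = sum(m == h for m, h in zip(model_picks, human_picks))
agreement = matches / len(human_picks)
print(f"agreement: {agreement:.0%}")  # -> 80% on this toy data
```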

Verification

Which brings us to the last step in the scientific discovery cycle.  I will take an associative mental leap at this point and jump to a discussion of Koestler’s work since, as an exemplar, it fits better into the phase of verification.  Also, much of Boden’s discussion around analogies, throughout her book, is driven by a passage in Koestler’s book (which somewhat echoes physicist Richard Feynman’s comments on discovery, previously covered in “Echoes of History”):

“Thus the real achievement in [scientific] discoveries…is ‘seeing an analogy where no one saw one before’.  The scientist who sets out to solve a problem…in the jargon of the present theory…experiments with various matrices, hoping that one will fit.  If it is a routine problem of a familiar type, he will soon discover some aspect of it which is similar in some respect to other problems encountered in the past, and thus allows him to come to grips with it…But in original discoveries, no single pre-fabricated matrix is adequate to bridge the gap…Here the only salvation lies in hitting on an auxiliary matrix in a previously unrelated field…”

[A. Koestler, The Act of Creation, p. 201]

Koestler defends his conclusion through a study of the role of hidden analogies in two scientific discoveries: Benjamin Franklin’s invention of the lightning rod and Nobel Prize winner Otto Loewi’s discovery of the chemical transmission of nerve impulses.

In Franklin’s case, Koestler traces the final insight, or “Eureka moment” as Koestler prefers to call it, to Franklin’s recognizing an analogy between directing a pointed object toward a storm cloud to increase the likelihood of conduction and floating on his back as a boy, pulled along by the wind, with a kite tied to his toe.  As Boden suggests, pragmatic centrality is key to Franklin’s analogy playing a role: he only values the kite as an analog for a rod reaching toward a thundercloud because of his childhood experience of using a kite to get closer to the wind.

In Loewi’s case, Koestler traces the discovery to a hidden analogy: medications sometimes had the same effect on organs as stimulation by electric impulse, yet the drug case suggested a ‘soup theory’ (the correct analogy, of a chemical diffusing in a liquid), while the impulse case suggested a ‘spark theory’ (the incorrect analogy, of electricity jumping a gap or being conducted along a wire).  Here it’s a combination of pragmatic centrality (recognizing the importance of the medication effects) and the weighting of evaluation criteria that helps select the preferred analogy.

“Verification” may be in the eye of the beholder here, but nonetheless Koestler’s approach of seeking case studies shows the idea behind verification: take your articulated ideas into the real world and see if they hold up.

…END DISCOVERY PROCESS.

Like all of the readings that appear in these logs, there’s a lot to process, and it can be difficult to translate it into your own habits and work.  Over the next few months it will be a key goal here at The Insightful Scientist to help shoulder some of that processing burden by trying to distill this wealth of research into everyday actions instead of leaving it as a pile of one-time theories.  But as for the logs, I’ve found it helps to come up with short “one-liners” that capture the heart of what I should be trying in my own work.  These one-liners are what appear as the titles to many log entries.  If I remember nothing else of what I read (or wrote!), then at least I always carry with me those titular reminders to “Feed the White Wolf”, that “What You Fire is What You Forge”, that “Representation (Not Rightness) Rules”, and so on.

For this week, I was struck by the analogy Koestler uses to summarize his thinking that the basis of discovery is finding new analogies:

“The essence of discovery is that unlikely marriage of cabbages and kings [a reference to a Lewis Carroll poem]—of previously unrelated frames of reference or universes of discourse—whose union will solve the previously unsoluble problem.  The search for the improbable partner involves long and arduous striving—but the ultimate matchmaker is the unconscious…the greater fluency and freedom of unconscious ideation; its ‘intellectual libertinage’…[its] indifference towards logical niceties and mental prejudices consecrated by tradition; its non-verbal, ‘visionary’ powers.”

[A. Koestler, The Act of Creation, p. 201]

Then my job as a discoverer is to seek the improbable partner, the previously unconnected and seemingly unrelated universes, whose union will make a more expansive whole.  Who knows: maybe pineapples and pools, the theme of this week’s log entry image, is a visual union that contains within it an intellectual union worthy of discovery.

The Marshmallow Maneuver

The Marshmallow Maneuver

Marshmallows are exquisite probes of the human psyche.

So, here’s a question: what do marshmallows, tape, string, scientific discovery, and uncooked spaghetti all have in common?  (And in case you’re wondering, this week’s feature photo is a bundle of uncooked spaghetti photographed from above.)

The answer comes from answering another question, posed in journalist Warren Berger’s book A More Beautiful Question: The Power of Inquiry to Spark Breakthrough Ideas.  I came across Berger’s book while doing preliminary research for formulating my discovery cycle (which I’ll write log entries on, one for each phase of the cycle I’m using, in early 2019).  It was a welcome reference for the first phase of the cycle, asking questions.  Questions are what ignite the process of scientific discovery because they express and focus the desire to know more about something, inspiring us to act.  It turns out that information on how to ask questions was trickier to find than I thought it would be.  Asking questions about how to ask questions is not something we spend much conscious time or effort on.  We worry more about answering questions well than about asking them well.

So back to Berger’s provocative question, which was the following: “How do you build a tower that doesn’t collapse (even after you put the marshmallow on top)?” (p.120)

It turns out that this question has been put to a number of groups in various studies and posed as an exercise in design innovation workshops the world over.  In the usual form, participants are asked to build the tallest free-standing structure they can, in an allotted time, using just pasta, tape, and string, with one marshmallow placed on top.  Interestingly enough, among the various groups of participants, two stand out in comparison: kindergartners outperform graduate MBA students on this task.  Part of the reason lies in psychology.

There is a long tradition linking marshmallow tests, kindergartners, and psychology.  The most famous example in popular culture is a study that used marshmallows (among other sweet treats) to investigate willpower in kindergartners and its correlation with later life outcomes.  In that study, kids were given the option to have one marshmallow now or wait a bit and, in return, get two later.  It appeared that children’s choices between instant gratification (give me one now) and delayed gratification (I’ll wait for two later) were linked to outcomes in adolescence, though the jury is still out on exactly how, and with which outcomes, the test correlates.

I had heard of this story (it’s often in the news), so when I came across marshmallows and kindergartners in Berger’s book, I assumed I already knew the punchline: if you are patient with a question and mull it over, it will lead to more positive outcomes.  It turns out I was dead wrong.  When it comes to asking questions, patience is your friend.  But when it comes to answering questions, instant gratification seems to be the way to go.

Here’s what the marshmallow tower studies have found: groups that engage in many trials throughout the allotted time, building, failing, and trying again, on average end up with taller structures.  Kindergartners jump right into this approach, preferring a hands-on tactic and prototyping early and often.  In contrast, other groups, like MBA students, spend the majority of their allotted time discussing how they should approach and solve the problem.  The result is fewer actual attempts and, on average, shorter structures (or no successful structure at all!).
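
To see why sheer attempt count matters, here is a toy simulation, my own construction rather than anything from Berger’s book or the underlying studies.  Each build attempt draws a random tower height; prototypers keep the best of many attempts, planners the best of a few.  All the numbers (means, spreads, attempt counts) are invented for illustration:

```python
# A toy simulation of why many quick trials tend to beat a few planned
# ones: each attempt draws a random height, and each group keeps its
# best result. Skill per attempt is identical; only the count differs.
import random

random.seed(42)

def best_height(attempts, mean=50.0, spread=20.0):
    """Best tower height (cm) over a given number of build attempts."""
    return max(random.gauss(mean, spread) for _ in range(attempts))

trials = 1000
prototypers = sum(best_height(attempts=8) for _ in range(trials)) / trials
planners    = sum(best_height(attempts=2) for _ in range(trials)) / trials

print(f"prototypers (8 attempts): {prototypers:.0f} cm on average")
print(f"planners    (2 attempts): {planners:.0f} cm on average")
# More attempts -> a higher expected best, with no difference in skill.
```

Even with identical skill on every single attempt, the group that tries more often walks away with the taller tower on average; the advantage lies purely in the iteration.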

It seems then that Berger’s book not only discusses how and what kind of questions spark breakthroughs (which I’ll cover in a later log entry), but also how best to start trying to answer those questions: trial and error.  If you’ve read many of my log entries on the site, you’ll know that favoring trial, error, and failure is fast becoming a recurrent theme.  But it’s always good to have reminders.  This is part of the intent of the ARTEMIS virtual reality software being built: to give you a way to build mental models of what you are trying to discover fast and often.  And if you read much in the startup (like Eric Ries’ The Lean Startup), software (like Jeff Sutherland’s Scrum: The Art of Doing Twice the Work in Half the Time), or entrepreneurial (like Jake Knapp’s Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days) arenas, then you will know that rapid prototyping to test out answers and learn by getting immediate feedback is all the rage right now.

So, trying out the marshmallow maneuver, using office supplies and uncooked food to build my own tower, may be the way to remind myself of the value of fearlessly trying out answers to big, weighty, scientific discovery questions.  A great scientific discoverer, Thomas Edison, inventor of the light bulb, once said in an interview with Harper’s Monthly Magazine (1890):

“I speak without exaggeration when I say that I have constructed three thousand different theories in connection with the electric light, each one of them reasonable and apparently to be true.  Yet only in two cases did my experiments prove the truth of my theory.”

(Thomas Edison, Harper’s Monthly Magazine, 1890)

He’s talking about theories, not experiments.  Three thousand theories.  As a theoretical particle physicist, that really resonates with me by “quantifying” the “degree of try” it might take to even think up a good answer to a good question.  Besides, maybe if there’s a marshmallow at the end of every attempt, I’ll get better at generating my own 3,000 theories to find the 2 that work.  And if I’m smart, I’ll go after that marshmallow today and not wait until tomorrow.
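
As a back-of-envelope coda, here is my own arithmetic (not Edison’s, and resting on the crude assumption that each theory is an independent attempt with his 2-in-3,000 hit rate) quantifying that degree of try:

```python
# Back-of-envelope arithmetic on Edison's numbers (my own illustration,
# assuming each theory independently succeeds with the same probability):
# how many attempts until you are 95% likely to have at least one success?
import math

p = 2 / 3000                 # per-theory success rate, about 0.067%
target = 0.95                # desired chance of at least one success
n = math.log(1 - target) / math.log(1 - p)

print(f"hit rate: {p:.4%}")                # -> 0.0667%
print(f"attempts needed: {math.ceil(n)}")  # -> 4492
```

On those toy assumptions, matching Edison’s own hit rate means budgeting for thousands of attempts before expecting even one success, which may be the best argument of all for going after the marshmallow today rather than tomorrow.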