Category: Scientific Discovery

Dancing with Discovery

Putting things into categories is helpful.  Sometimes it lets you recognize commonalities between things that you didn’t notice before.  Other times it gives you a mental shortcut for knowing how to interact with something—once you know its category, you’re more likely to know what it’s for and what to do with it.

In the first Insight Exchange, which I recently hosted at my home institution, I structured the process of scientific discovery into a five-phase cycle and the types of scientific discovery into four categories.  The purpose of this “typology” of scientific discovery was to help guide the conversation in the group.  I also had two hunches: (1) that scientists pursuing similar types of discoveries, even if they are from different fields, will share similar challenges and setbacks; and (2) that each category of scientific discovery has a set of associated strategies uniquely suited to making progress on that kind of discovery.

This idea of discovery categories and associated strategies is a keystone of my goal to build software that helps promote scientific discovery.  As I work on finalizing a first version of a territory map of scientific discovery and strategies (to be released under Spark Points sometime later this year), I keep mulling over the questions: What distinguishes the types of scientific discoveries?  And what strategies are most useful for which types of scientific discoveries?

As always, I’m looking for ways to answer these questions that work across a broad range of fields, not just physics.  For the Insight Exchange I used a four-category breakdown of types of scientific discoveries: object, attribute, mechanism, technique.  Each of these is labelled by the primary type of knowledge being sought, as described in the list below.

 

CATEGORIES OF DISCOVERY

  • OBJECT
    • new object
  • ATTRIBUTE
    • new property of a known object
  • MECHANISM
    • new behavior or phenomenon, or explanation of a known behavior or phenomenon
  • TECHNIQUE
    • new tool or method to generate a known object, attribute, or mechanism

 

For example, in my own field of neutrino physics open questions related to each category would be:

 

EXAMPLES OF CATEGORIES OF DISCOVERY IN NEUTRINO PHYSICS

  • OBJECT –
    • do additional neutrinos exist beyond the three known standard model (SM) ones?
  • ATTRIBUTE –
    • does the neutrino have a non-zero magnetic moment?
  • MECHANISM –
    • what is the origin of neutrino mass?
  • TECHNIQUE –
    • how can you develop a detector capable of observing beyond the standard model (BSM) physics using coherent elastic neutrino nucleus scattering (CEvNS)?

 

So, in this classification of scientific discovery, it’s a little like playing a professional version of the childhood question game “Animal, Vegetable, or Mineral?”.

The group in the first Insight Exchange did not take much issue with my category labels, but many people felt their personal scientific discovery goal did not fit cleanly into one category and listed it as belonging to several.  So as a workshop strategy this typology didn’t work out too well, since I had planned to put people into small teams grouped by category, on the theory that they would share more of the same challenges and, therefore, be better positioned to offer each other feedback.  Best laid plans.  Instead, I ended up assigning teams completely differently, in such a way as to ensure a good diversity of scientific fields and career stages within each team.

So, I’ve gone back to the drawing board a little to keep thinking about this idea of types of scientific discovery.  So far, I’ve been struggling to find search terms that yield the right material in a literature review:  Is it “categories” of scientific discovery?  Is it “types”?  A “typology of scientific discovery”?  Or maybe “the classification of scientific discoveries”?

My search remains unsuccessful to some degree, but I did find two very short writings that attempt to do the same thing.  The first is an editorial in Science magazine entitled “The Cha-Cha-Cha Theory of Scientific Discovery” by Daniel E. Koshland Jr., a professor of biochemistry and molecular and cell biology and a former editor of that publication.  Koshland’s theory is that historical patterns of scientific (and non-scientific) discovery suggest that discovery can be divided into three categories: charge, challenge, and chance.

In Koshland’s theory, charge discoveries are about finding a solution to a well-known problem.  In the charge type of discovery, the discoverer’s primary role is to view the same data and context already well known to all, but to come to some novel conclusion by perceiving that collection of facts in a way no other researcher has.  In the challenge type of discovery, the discoverer’s primary role is to bring cohesion and consistency to a body of well-known facts and/or anomalies that are in tension or lack a unifying conceptual framework.  Lastly, in the chance type of discovery, the discoverer’s role is to perceive and explain the central importance of a known or recently observed fact obtained by accident.  In his editorial, Koshland gives numerous examples of each type of discovery from fields as diverse as chemistry, physics, and biology.

More importantly, he extends his category theory to note an additional pattern:

“…the original contribution of the discoverer can be applied at different points in the solution of a problem.  In the Charge category, originality lies in the devising of a solution, not in the perception of the problem.  In the Challenge category, the originality is in perceiving the anomalies and their importance and devising a new concept that explains them.  In the Chance category, the original contribution is the perception of the importance of the accident and articulating the phenomenon on which it throws light.”

[D. E. Koshland Jr., Science, vol. 317, p. 761 (2007)]

Before I do a little comparison of these different ways to categorize scientific discovery, let me also throw another article into the mix.  Keiichi Noe, a professor of philosophy at Tohoku University, wrote a contribution, entitled “The Structure of Scientific Discovery: From a Philosophical Point of View”, to a book on discovery science.  The actual focus of Noe’s paper is on the mental process by which discovery is achieved and how this might be translated into a computational algorithm.  But to elucidate such strategies, Noe first defines two types of discovery.

For Noe, one type of scientific discovery is “factual discovery”, the “discovery of a new fact guided by an established theory” (p.33).  In contrast, the second type of discovery is a “conceptual discovery”, “which [proposes] systematic explanations of…phenomena by reinterpreting pre-existing facts and laws from a new point of view” (p.33).  In Noe’s framework, the significance of these distinctions is that the scientist must bring a different kind of thought process, in particular a different implementation of the imagination, to the pursuit of each kind of discovery.  For factual discovery what Noe calls a “metonymical imagination” is needed; whereas, for conceptual discovery a “metaphorical imagination” is needed.

In the case of the discovery of new facts, the metonymical imagination refers to a way of thinking in which newly discovered items that are closely related, as seen through the lens of existing theory, are grouped together.  As Noe puts it these “discoveries…complete an unfinished theory” (p. 37).  In contrast, in the case of the discovery of essentially new theory, the metaphorical imagination refers to a way of thinking in which hidden or implied links are created between unrelated items that share common characteristics.  In these discoveries “[a change of] viewpoint from explicit facts to implicit unknown relations [occurs]” (p.37).

If we use the five-phase discovery cycle (question-ideation-articulation-evaluation-verification) as a common grounding point, then these three different typologies of scientific discovery—my quartet, Koshland’s trio, and Noe’s duo—each represent a different way of thinking about the discovery cycle.  For me, the discovery classifications emphasize the type of output desired by the discoverer at the end of the discovery cycle (i.e., after successful verification)—is it an object, a description, an explanation, or a method?  For Koshland the emphasis is on the point at which the discoverer must innovate within the discovery cycle in order to discover something new—either ideation (charge and challenge discoveries) or articulation (chance discoveries).  For Noe, the emphasis is on the overarching viewpoint and mindset that the discoverer applies in moving through the entire cycle—do you use the prevailing view or replace it?

It’s also easy to see that, depending on which typology of scientific discovery you use, you will perceive different strategies and techniques as more useful.  Within my typology, mathematical and logical strategies are better suited to mechanism discoveries, building and prototyping strategies to technique discoveries, and a mix of both to object and attribute discoveries.  For Koshland, strategies that boost ideation or streamline articulation will most advance discovery.  Noe has explicitly defined useful tactics: the metonymical imagination for factual discovery and the metaphorical imagination for conceptual discovery.

Which brings me to the end of my musings for this week.  Some weeks, coming up with an image that sums up my new perspective on scientific discovery is incredibly challenging.  But this week Koshland has made it easy for me:

If scientific discovery is a kind of dance, wherein the dancers become more skilled and graceful with time, producing ever more intricate choreographies of knowledge, then typologies of scientific discovery are merely styles of dance that one can practice.  For me it’s a kind of folksy American square dance or mannered English quadrille, for Koshland a vibrant Cuban cha-cha, and for Noe a delicate French pas de deux from ballet.  But whatever your style for dancing with discovery, knowing the kind of dance you’re in just might help you improve your moves.

Seek An Improbable Partner

In December I plan to post a series of log entries dedicated to the use of analogies.  These posts will be true “log” items, in the sense that they will journal my progress as I try to create a “recipe” for applying analogical thinking in a research physics context, in order to generate insight and foster scientific discovery.

In the meantime, I picked up a copy of psychologist Margaret Boden’s book, The Creative Mind:  Myths and Mechanisms, for this week’s reading, thinking that I would be covering a separate intellectual arena.  But it turns out there was an intriguing section on analogies!  Needless to say, I wasn’t about to set it aside for a month and a half.  So, unofficially, this has become the first log entry in the analogy series.  If you’ve been following the Physicist’s Log and Boden’s name sounds familiar, that’s because I drew on her work in the earlier log entry discussing the definition of scientific discovery, “In the Name of Discovery”.

Boden’s book explores the mechanisms, from a cognitive psychology standpoint, that underpin our ability to think new thoughts.  She draws on a wide range of research, most prominently computational psychology (mimicking processes in the mind using computer algorithms).  The advent of computational psychology and its studies of creativity, problem solving, and discovery is fortuitous, because a recipe for practice and a computational algorithm are much the same.  Both provide (1) the content necessary to obtain a given outcome and (2) a sequence of implementation guiding the use of that content to obtain the outcome.  (I mentioned this attention to both content and action earlier, in the log entry “The Physicist’s Repertoire”.)

Now, discovering why analogies play a role in scientific discovery is an attempt at scientific discovery itself.  So it should follow a scientific discovery cycle, which I’ve formulated as the process of question → ideation → articulation → evaluation → verification.  We can trace this flow in Boden’s discussion, as well as in a resource she cites in her book, essayist Arthur Koestler’s The Act of Creation, which also tackles questions about human creativity.

In the spirit of a recipe, or what Boden might prefer to call a “conceptual space…a structured [style] of thought…that is familiar to (and valued by) a certain group” (p.4), I will use the discovery cycle as an outline to frame my own discussion.  [Physicists respond well to the idea of recipes; hence one of the most heavily referenced books in computational physics is called Numerical Recipes.]

BEGIN DISCOVERY PROCESS…

Question

I’ve already stated the starting question, the frame that highlights the desire to know or do more with something and that ignites a discovery chase:  why do analogies play a role in scientific discovery?  Or, the more hypothesis-friendly version: by what mechanisms does analogy play a role in creativity?  Here we assume that scientific discovery, at the level of an individual thinking up a new idea, represents a sub-type of creativity.  Boden further crafts this overarching question into two sub-questions to frame the main discussion of analogies in her book (pp. 186–198):  How are existing analogies evaluated for relevance?  How are new relevant analogies generated?

Ideation

Ideation is about coming up with answers to the questions posed in the first phase of scientific discovery.  For the question “How are existing analogies evaluated for relevance?” the overall answer presented by Boden (not necessarily her idea per se, but more a synthesis of the existing research) is that the mind contains a storehouse of possible analogs against which it can compare the present example, and it determines (or selects) a good match based on a set of criteria to be considered (called constraints).  The exact nature of this set of criteria, and the relative importance given to each criterion in the evaluation process, remains an open question, but three areas are cited: structural match (correspondence between elements or relations), semantic match (similarity of meaning), and “pragmatic centrality” (the likelihood that the match is important to the originator of the analogy).  For the second question, “How are new relevant analogies generated?”, the overall idea elicited by Boden is that the mind contains a base set of knowledge and a set of descriptive identifiers which classify that knowledge.  Given a source item, the mind tries to generate a target item, drawing from its knowledge base and relying on its descriptive identifiers to tell it which features of the source item are the important ones to re-create in the new target item.
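The constraint-based matching idea can be sketched in a few lines of code.  This is purely my own toy illustration, not any of the actual programs Boden cites; every name, score, and weight below is hypothetical, and I assume the candidates have already been scored on each criterion:

```python
# Toy sketch of constraint-based analogy selection (my own illustration,
# not the actual ACME/ARCS programs). Each stored analog carries a score
# for each of the three constraint areas.

def score_analogy(candidate, weights):
    """Combine the three constraint scores into one weighted total."""
    return (weights["structural"] * candidate["structural"]   # element/relation correspondence
            + weights["semantic"] * candidate["semantic"]     # similarity of meaning
            + weights["pragmatic"] * candidate["pragmatic"])  # importance to the analogizer

def best_match(storehouse, weights):
    """Select the stored analog with the highest overall score."""
    return max(storehouse, key=lambda c: score_analogy(c, weights))

# Hypothetical storehouse of possible analogs, pre-scored on the criteria.
storehouse = [
    {"name": "water flow (for electric current)",
     "structural": 0.9, "semantic": 0.4, "pragmatic": 0.7},
    {"name": "crowd movement (for electric current)",
     "structural": 0.5, "semantic": 0.6, "pragmatic": 0.3},
]

# The relative importance of each criterion is an open question, so the
# weights are free parameters of the sketch.
weights = {"structural": 0.5, "semantic": 0.2, "pragmatic": 0.3}
print(best_match(storehouse, weights)["name"])  # → water flow (for electric current)
```

Changing the weights can change which analog wins, which mirrors the open question of how the mind prioritizes the criteria.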

Articulation

Boden cites studies of computational algorithms designed to either identify the best analog for a target item from among a set of pre-loaded items, or to generate an analog to a target item based on a set of creation rules.  These analogical mapping programs (ACME, ARCS, COPYCAT, SME) represent the articulation of the ideas from the previous phase of the scientific discovery cycle.  They translate an internalized mental conception into externalized physical artifacts, with well-defined content and relations that can be tested.  They are, of course, highly idealized and very simplified, but that’s what makes the clearest science: the ability to tinker with just one feature and see how the world responds, in order to better understand that feature’s role in “how things work.”

Evaluation

In the case of a bit of computer code, evaluating the overall utility of the initial idea is easy enough.  Run the code and interpret the output, i.e., assess the analogy returned and see whether or not it matches what a human being would have provided as the answer.  The more often it does, the more it suggests that the processes coded may represent actual processes in the mind.  Some of the programs cited above do match human outputs, so it seems there is something useful in the ideas about analogy that they encode.

Verification

Which brings us to the last step in the scientific discovery cycle.  I will take an associative mental leap at this point and jump to a discussion of Koestler’s work since, as an exemplar, it fits better into the phase of verification.  Also, much of Boden’s discussion around analogies, throughout her book, is driven by a passage in Koestler’s book (which somewhat echoes physicist Richard Feynman’s comments on discovery, previously covered in “Echoes of History”):

“Thus the real achievement in [scientific] discoveries…is ‘seeing an analogy where no one saw one before’.  The scientist who sets out to solve a problem…in the jargon of the present theory…experiments with various matrices, hoping that one will fit.  If it is a routine problem of a familiar type, he will soon discover some aspect of it which is similar in some respect to other problems encountered in the past, and thus allows him to come to grips with it…But in original discoveries, no single pre-fabricated matrix is adequate to bridge the gap…Here the only salvation lies in hitting on an auxiliary matrix in a previously unrelated field…”

[A. Koestler, The Act of Creation, p. 201]

Koestler defends his conclusion through a study of the role of hidden analogies in two scientific discoveries: Benjamin Franklin’s invention of the lightning rod and Nobel Prize winner Otto Loewi’s discovery of the chemical transmission of nerve impulses.

In Franklin’s case, Koestler traces the final insight, or “Eureka moment” as Koestler prefers to call it, to Franklin’s recognition of an analogy between directing a pointed object toward a storm cloud to increase the likelihood of conduction and, as a boy, floating on his back while being pulled along by a kite tied to his toe.  As Boden suggests, pragmatic centrality is key to Franklin’s analogy playing a role: he only valued the kite as an analog for a rod reaching toward a thundercloud because kites, in his childhood, were his way of getting closer to the wind.

In Loewi’s case, Koestler traces the discovery to a hidden analogy: medications sometimes had the same effect on organs as stimulation by electric impulse, yet the drug case relied on a ‘soup theory’ (the correct analogy, of a chemical diffusing in a liquid), whereas the impulse model relied on a ‘spark theory’ (the incorrect analogy, of electricity jumping a gap or being conducted along a wire).  Here it’s a combination of pragmatic centrality (recognizing the importance of medication effects) and the weighting of value criteria that helps select the preferred analogy.

“Verification” may be in the eye of the beholder here, but nonetheless Koestler’s approach to seeking case studies shows the idea behind verification, to take your articulated ideas into the real world and see if they hold up.

…END DISCOVERY PROCESS.

Like all of the readings that appear in these logs, there’s a lot to process and it can be difficult to translate it into your own habits and work.  Over the next few months it will be a key goal here at The Insightful Scientist to help shoulder some of that processing burden by trying to distill this wealth of research into everyday actions instead of leaving them as one-time theories.  But as for the logs, I’ve found it helps to come up with short “one-liners” that capture the heart of what I should be trying in my own work.  These one-liners are what appear as the titles to many log entries.  If I remember nothing else of what I read (or wrote!) then at least I always carry with me those titular reminders to “Feed the White Wolf”, that “What You Fire is What You Forge”, that “Representation (Not Rightness) Rules”, and so on.

For this week, I was struck by the analogy Koestler uses to summarize his thinking that the basis of discovery is finding new analogies:

“The essence of discovery is that unlikely marriage of cabbages and kings [a reference to a Lewis Carroll poem]—of previously unrelated frames of reference or universes of discourse—whose union will solve the previously insoluble problem.  The search for the improbable partner involves long and arduous striving—but the ultimate matchmaker is the unconscious…the greater fluency and freedom of unconscious ideation; its ‘intellectual libertinage’…[its] indifference towards logical niceties and mental prejudices consecrated by tradition; its non-verbal, ‘visionary’ powers.”

[A. Koestler, The Act of Creation, p. 201]

Then my job as a discoverer is to seek the improbable partner, the previously unconnected and seemingly unrelated universes, whose union will make a more expansive whole.  Who knows: maybe pineapples and pools, the theme of this week’s log entry image, is a visual union that contains within it an intellectual union worthy of discovery.

Real Particles in a Ghost Universe

I love imagination.  Ergo, I love thought experiments.

I was also inspired to become a physicist by Einstein.  Einstein was famous for his thought experiments.  Ergo, I love thought experiments.

My most recent thought experiment has been to try to flip on its head a way neutrino physicists have of making neutrinos sound as mystifying as they are to us as an object of study.  We call them “ghost particles” inundating our universe.  In fact, I once read an interesting estimate that in the one hour it would take you to read a popular science pamphlet on neutrinos, “about 100,000,000,000,000,000,000 [one hundred billion billion] neutrinos [will have] zipped through your body unannounced” (pamphlet p.23).  Which does seem ghost-like…and a little creepy.

But I wanted to break free of seeing things according to the usual narrative.  And it occurred to me that we don’t think of particles as ghosts; we think of mist-like people as ghosts.  At which point a thought experiment occurred to me:  suppose I tried to re-envision this weakly interacting character of neutrinos by giving primacy to their perspective over mine?  In that case, we would be the ghosts, the mist-like people forms that could be passed right through: if I envision myself as a little neutrino being born in and careening out of a ghost-like sun and whizzing through a ghost-like earth, then neutrinos appear to be “real particles” in a “ghost universe”.  Since neutrinos in the universe vastly outnumber humans, the neutrino ghost universe is the majority view, compared to our ghost-particle minority view.  Which raised the question: could this oddly anthropomorphic thought experiment help inspire me to discover something new?  Or was it the worst thought experiment ever?

Much to my delight, at around the time this idle thought popped into my head I was reading a paper by historian of science J. D. Norton about how Einstein made discoveries.  I wandered to Norton’s webpage and was perusing his articles when this title jumped right out at me: “The Worst Thought Experiment”, in The Routledge Companion to Thought Experiments (there’s a book for that?!).  In it Norton dissects and demolishes a thought experiment by physicist Leo Szilard regarding “Maxwell’s demon” (itself a thought experiment, one suggesting that the second law of thermodynamics can be violated, i.e., that the entropy of an isolated system can decrease over time).  Szilard’s version evolved to include information theory and became the basis for a line of discussion and thought experiments that continues today, with articles appearing in Nature and Scientific American.

Whether or not I agree with the physics assessment of this particular thought experiment is not important here.  What is of real value is Norton’s canny synthesis of criteria by which good and bad thought experiments can be distinguished:

GOOD THOUGHT EXPERIMENT

  1. Thought experiment examines a specific behavior that illustrates a more general behavior.
  2. Thought experiment idealizes away irrelevant behaviors to highlight relevant behaviors.

BAD THOUGHT EXPERIMENT

  1. Thought experiment examines a common behavior that misrepresents general behavior.
  2. Thought experiment idealizes away relevant behaviors as irrelevant behaviors.

Here at “The Insightful Scientist” my purpose is always to emphasize practice over philosophy.  So if good thought experiments can help to foster scientific discovery, as Einstein’s case of riding on light beams suggests, and even bad thought experiments can foster scientific activity, as Norton’s Szilard case suggests, then there’s a practice technique for thought experiments embedded in Norton’s criteria.

The technique might look something like this: step 1) come up with what you think is a good thought experiment in your area and do the calculations; step 2) come up with the correlated bad thought experiments, e.g., willfully make nuisance parameters central, use easy extreme cases that are atypical, etc. and do the calculations; step 3) see if your good thought experiment holds up in the face of your bad thought experiment.

To go one layer further, I suspect that the conceptual-metaphor idea from embodied mathematics, which I’ve previously mentioned in “The Re-Education of an Educated Mind”, is partly at play in separating good from bad thought experiments.  One key to the use of conceptual metaphor in mathematics is to import, in their entirety, the inferences embedded in the metaphorical source topic and apply them, consistently, to the target topic.  For example, in their book discussing conceptual metaphors and mathematics, Lakoff and Nunez define four grounding metaphors which form the basis for turning sensory-motor experience into mathematics.  One of these is the “Arithmetic Is Object Collection” metaphor (p. 55), which goes like this:

Arithmetic is Object Collection Metaphor by G. Lakoff and R. Nunez

Source Domain (Object Collection)  → Target Domain (Arithmetic)

Collection of objects of the same size → Numbers

The size of the collection → The size of the number

Bigger → Greater

Smaller → Less

The smallest collection → The unit (one)

Putting collections together → Addition

Taking a smaller collection from a larger collection → Subtraction
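One way to see the “import the inferences in their entirety” requirement is to treat the grounding metaphor as a literal source-to-target mapping that must be applied to every mapped term.  The sketch below is my own hypothetical illustration, not anything from Lakoff and Nunez:

```python
# Hypothetical sketch: the "Arithmetic Is Object Collection" metaphor as an
# explicit source-to-target mapping. Dicts preserve insertion order in
# Python 3.7+, so longer phrases are listed first and replaced first.

METAPHOR = {
    "taking a smaller collection from a larger collection": "subtraction",
    "putting collections together": "addition",
    "collection of objects of the same size": "number",
    "the size of the collection": "the size of the number",
    "the smallest collection": "the unit (one)",
    "bigger": "greater",
    "smaller": "less",
}

def port_inference(statement):
    """Translate a source-domain statement term by term into the target domain.

    Every mapped term must be carried over; a partial or inconsistent
    translation is the kind of failure Norton finds in Szilard."""
    for source_term, target_term in METAPHOR.items():
        statement = statement.replace(source_term, target_term)
    return statement

print(port_inference("putting collections together makes the size of the collection bigger"))
# → addition makes the size of the number greater
```

Even in this toy, consistency constraints show up: the longer phrases must be replaced before the shorter terms they contain, or the translation is mangled.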

If one takes the source topic to be a kind of physics (thermal fluctuations, in Szilard’s case) and the target to be the content of the thought experiment (Szilard’s one-molecule gas machine), then Norton’s discussion seems to highlight that Szilard did not port over key inferences in a consistent way; hence Szilard’s thought experiment may be a bad one.

Another element of a good thought experiment worth adding to Norton’s focus is that it should not just attempt to visualize an experiment.  It should visualize an experiment where the outcome seems to violate a fundamental assumption or belief.  In other words, thought experiments are about testing conundrums, not just daydreaming.  A more illustrative example of a conundrum and a good thought experiment is the trolley problem from the philosophy of ethics, which tests the idea that “the good of the many outweighs the good of the few” by asking you to make a moral choice between two immoral options.

So back to my little neutrino thought experiment.  It’s easy to see now that it’s actually insufficiently formulated to be a thought experiment at all—there’s no idealized specific behavior as part of the thought.  Even more importantly I also have no concept of “how ghosts behave” to act as an inferential set to import into my thought.  It turns out, what I cited at the beginning of this log entry was just a good old-fashioned visual metaphor and not a thought experiment at all—so it’s an epic fail!  But as I said in “What You Fire Is What You Forge”, failure is key to improvement.  By trying to nail down a more systematic way to evaluate thought experiments I’ve stumbled upon some ideas for how to reformulate bad thought experiments into good ones by pinpointing what makes them bad and improving it.

So, the world of our little mercurial neutrinos, able to spontaneously change their identity at will, is indeed an interesting breeding ground for the pursuit of discovery.  And an epic failure of understanding proved to be a spark point for greater insight.  A team of researchers at the University of Chicago may be right: one of the greatest gifts we might be able to give ourselves is a repository of failed attempts.  It turns out that the worst thought experiment may prove to be our best starting point for meaningful insight.

What You Fire Is What You Forge

I have been letting ideas about practice, how to gain skill, how to gain mastery, and how to gain expertise percolate for some time now.  Because what’s the point of learning a new skill, like scientific discovery, if you can’t practice it until you can do it well consistently?  I’ve been very worried about this idea of practice.  Mainly because, although I know how to practice, I don’t know how to practice best.

When I was younger, especially as a student, I had time.  It was my job to practice.  Homework was built-in to my days.  Now that I’m a researcher, I don’t get allotted practice time.  I have to treat practice like a second, part-time, unpaid job.  That means every practice session needs to count.

I was feeling a little peeved and dispirited about this lack of time, which got me to thinking about adults who do have dedicated practice as part of their paid day job: professional athletes and professional musicians.  Initially, this led to binge watching documentaries on professional American football teams, just oozing jealousy as I watched them practice, practice, practice, both on and off season.

Turning to sports proved to be a useful trigger for finding a new resource.  Whenever I get in a funk about discovery, my go-to solution is to read a nonfiction book.  Nonfiction books are easier to read than peer-reviewed journal articles, but more substantive than magazine and news articles.  The best ones give you references to research papers, so you can explore further once you’ve gotten an overview.  In other words, popular nonfiction books are a good place to start when you’re a beginner or have little time or energy.  It’s best to save the research articles for when you are more focused and more knowledgeable.

So, hot on the heels of envying NFL players with practice as part of their job description, I found a nonfiction book to soothe my temper: The Talent Code: Greatness Isn’t Born, It’s Grown, by sports journalist Daniel Coyle.  Primarily through an investigation of exceptionally skilled professional athletes, and in conversation with neurologists, Coyle arrives at the following hypotheses:

“(1) Every human movement, thought, or feeling is a precisely timed electric signal travelling through a chain of neurons [in the brain] —a circuit of nerve fibers.  (2) Myelin is the insulation that wraps these nerve fibers and increases signal strength, speed, and accuracy.  (3) The more we fire a particular circuit, the more myelin optimizes that circuit, and the stronger, faster, and more fluent our movements and thoughts become.”

(D. Coyle, The Talent Code, p. 32)

The myelin story works like this: when you first acquire a neural circuit, the parts that carry signals are “bare”, like an exposed electrical wire that leaks signal.  As you use this circuit to execute the skill and fail, your brain recognizes that the system needs improvement to meet the demands placed on it.  One of those improvements is wrapping a substance called myelin around the nerve fibers, like insulating an electrical wire.  The more times this occurs, the more strongly and quickly you can fire that skill.  Practice enough in this deep, thoughtful, responsive state, working in the zone where you make mistakes and correct them, and you can become “talented.”  As Coyle puts it, “Struggle is not an option: it’s a biological imperative” (p. 34).

In case you’re wondering about age:

“[We] continue to experience a net gain of myelin until around the age of fifty, when the balance tips toward loss.  [But we] retain the ability to myelinate throughout life—thankfully, 5 percent of [the needed cells] remain immature, always ready to answer the call.”

(D. Coyle, The Talent Code, p. 45)

But the real beauty is that:

“Myelin is universal.  One size fits all skills…Myelin is meritocratic: circuits that fire get insulated…To put it another way, myelin doesn’t care who you are—it cares what you do.”

(D. Coyle, The Talent Code, p. 44)

(Just another reason to “Feed the White Wolf” if you’re going to spend time training your brain for discovery.)

This gave me an immediate possibility for how to make the most of my daily discovery practice sessions.  First, fire up those skills frequently.  Second, work at the edge of failure and correct those mistakes ASAP.  The idea of getting it wrong to get it right is contrary to my physics training.  Usually you get a lecture (or a seminar) and some training, read something, and then you hope to get the problems right–you don’t actively solve problems and rejoice at errors, let alone seek out opportunities to make errors.

But now I see it this way: at my daily practice sessions, where I take one discovery technique and apply it to a random physics question (at the moment, the questions I discuss with my physics students in classes), I need to push my ability to apply the technique until I make some mistake.  At that point the neurons are firing, the signal is sent that the system is failing, and by actively seeking out and correcting an error I am building a better brain for discovery.

So, my reading of Coyle’s book is that you want to handle your neurons this way to become “talented” at a skill:

  1. Fire them frequently (practice)
  2. Force them to fail (work at the edge of your skill level)
  3. Feed them feedback (error correct on the spot; don’t move on until you’ve got it right)
  4. Fire them again (do it right)

The more you fire your synapses at the edge of your discovery skill level, the more your brain will help you craft a better skill set with which to discover things.  It’s something blacksmiths have known for millennia: what you fire is what you forge.

Echoes of History


How do I model excellence?  I’ve been reading some ideas on how to model the performance of successful people and wanted to translate this into scientific discovery and physics.  If I want to model the physics discovery capabilities of one of the greats—Einstein, Newton, Noether, Fermi, Meitner, etc.—how exactly do I do that?  My natural response was to read biographies and historical analyses of how the discoverers made their discoveries.  My thinking was that by reading enough of these biographies I could distill the contexts, triggers, events, and habits that could be molded into modern practices.  But who to start with?  Should I let childhood serendipity bias me and pick Newton or Einstein?  Or pick a neutrino pioneer like Pontecorvo or Pauli?  And then it occurred to me to just ask Google: “famous physicists’ ideas on discovery”.

It turns out Google AI came up with a pretty good answer, given by Feynman himself.

Richard P. Feynman is a well-known historical physicist who, among other honors, won the 1965 Nobel Prize in Physics.  Feynman is a famous figure in the annals of scientific discovery, almost as famous as Einstein (though Einstein has figurines and action figures while Feynman does not).  Like Einstein, Feynman was generous in communicating his insights and methods to his colleagues and the public at large.

In 1964, Feynman gave a series of seven public lectures at Cornell University, taped by the BBC and later published in transcript form as “The Character of Physical Law.”  The seventh lecture, titled “Seeking New Laws”, opens:

“What I want to talk about in this lecture is…what we think we know, what there is to guess, and how one goes about guessing.  Someone suggested that it would be ideal if, as I went along, I would slowly explain how to guess a law, and then end by creating a new law for you.  I do not know whether I shall be able to do that.”

(Feynman, “Seeking New Laws”, p. 1)

Upon reading this I felt as if the Google search bar had become a magic lamp and granted me my first wish: an interview with a physicist who knows how to discover things and where he gives advice about how to discover things!

Feynman suggests that to look for a new law, “First we guess it” (p. 156).  Then you calculate the result of the mathematical translation of the law and compare it to experiment.  Now guessing as adeptly as Feynman did, without guidance, is a bit intimidating, but luckily he goes on:

“Because I am a theoretical physicist…I want to now concentrate on how you make the guesses.  As I said before, it is not of any importance where the guess comes from; it is only important that it should agree with experiment, and that it should be as definite as possible…[One might think that] guessing is a dumb man’s job.  Actually it is quite the opposite, and I will try to explain why.  The first problem is how to start.”

(Feynman, “Seeking New Laws”, p. 160)

Two pages later Feynman offers some practical advice on how, precisely, to start:

“One way you might suggest is to look at history to see how the other guys did it.  So we look at history.  We must start with Newton.”

(Feynman, “Seeking New Laws”, p. 162)

Now at this point I’m delighted: Feynman thought of this historical digging idea too!  So the approach seems less frivolous and I don’t feel so guilty for Googling; even Feynman might have tried it.  In fact, Feynman goes on to summarize his perception of the approaches used at five key turning points in physics history, which I’ll swiftly recap here (pp. 162-163, 170):

  1. Newton—guess a deeper law by cobbling together mathematical ideas close to experimentally observed data
  2. Maxwell/Special Relativity—guess a deeper law by cobbling together mathematical ideas that other people have devised, see where they disagree, and invent whatever it takes to make them all agree
  3. Quantum Mechanics—guess the right equation and make it ruthlessly accountable to measurement
  4. Weak Particle Decays—guess the right equation and be willing to challenge the contradictory experimental evidence
  5. Einstein—guess a new principle and add it to the known ones
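To see strategy 1 in miniature, here is a toy guess-and-check sketch in Python (my own illustration, not Feynman's; the planetary values are rough textbook numbers): guess a power law relating a planet's orbital period T to its semi-major axis a, compute the consequences, and compare with observation.

```python
# Feynman's cycle in miniature: guess a law, compute its consequences,
# compare with observation.  The guessed "law" is T = a**p (orbital
# period vs. semi-major axis) and the data are rough textbook values.
data = [(1.000, 1.000), (1.524, 1.881), (5.203, 11.862)]  # (a in AU, T in years)

def misfit(p):
    """Total squared disagreement between the guess T = a**p and the data."""
    return sum((a ** p - t) ** 2 for a, t in data)

guesses = [1.0, 1.25, 1.5, 1.75, 2.0]
best = min(guesses, key=misfit)
print("best exponent:", best)  # Kepler's third law: T**2 = a**3, i.e. p = 3/2
```

The cheap list of guesses here stands in for the hard creative step; the point is only the loop itself: guess, compute, compare.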

But now that we’ve mined history, Feynman paradoxically concludes:

“I am sure that history does not repeat itself in physics…  The reason is this.  Any schemes – such as ‘think of symmetry laws’, or ‘put the information in mathematical form’, or ‘guess equations’ – are known to everybody now, and they are all tried all the time.  When you are stuck, the answer cannot be one of these, because you will have tried these right away.  There must be another way next time.  Each time we get into this log-jam of too much trouble, too many problems, it is because the methods that we are using are just like the ones we have used before.  The next scheme, the new discovery, is going to be made in a completely different way.  So history does not help us much.”

(Feynman, “Seeking New Laws”, pp. 163-164)

At which point I’m plunged into annoyance: Feynman thinks the only way to discover something new is to discover something new to make new discoveries with!

After Feynman throws out this chicken-and-egg paradox about how to discover something new, his exact ideas get tricky, and paraphrasing fails to do justice to the direction the rest of his lecture takes (the original transcript is worth a read).  But from a practice point of view, my previous numbered list drives home Feynman’s echoing refrain: guess.  That leaves two questions: (1) how to guess and (2) how to evaluate whether a guess is any good, assuming you ignore the historical options in the list above, as Feynman suggests will be necessary.

According to Feynman, “You can have as much junk in the guess as you like, provided that the consequences can be compared with experiment” (p. 164).  He goes on to suggest that sometimes the guess will revolve around deciding to keep some assumptions and throw others out.  He also suggests that having multiple theories or representations for the same outcome can help:

“By putting the theory in a certain kind of framework you get an idea of what to change…[If you have two theories A and B,] although they are identical before they are changed, there are certain ways of changing one which looks natural which will not look natural in the other.”

(Feynman, “Seeking New Laws”, p. 168)

He also seems to weigh in against ad hoc add-on guesses:

“For instance, Newton’s ideas about space and time agreed with experiment very well, but in order to get the correct motion of the orbit of Mercury…the difference in the character of the theory needed was enormous [i.e., you needed Einstein’s general relativity].  The reason is that Newton’s laws were so simple and so perfect…In order to get something that would produce a slightly different result it had to be completely different.  In stating a new law you cannot make imperfections on a perfect thing; you have to have another perfect thing.”

(Feynman, “Seeking New Laws”, p. 169)

So, no tweaking, fudging, or knob turning allowed.  Lastly, Feynman discusses how to evaluate if a scientific guess is good or bad:

“It is always easy when you have made a guess, and done two or three little calculations to make sure that it is not obviously wrong, to know that it is right.  When you get it right, it is obvious that it is right – at least if you have any experience – because usually what happens is that more comes out than goes in.”

(Feynman, “Seeking New Laws”, p. 171)

So, Feynman’s take on the mental work that goes into discovery is that it is a persistent, strategic, guessing game.  The only way to succeed is to keep guessing until you get it right by learning something new:

“We must, and we should, and we always do, extend as far as we can beyond what we already know, beyond those ideas that we have already obtained.  Dangerous?  Yes.  Uncertain?  Yes.  But it is the only way to make science useful.  Science is only useful if it tells you about some experiment that has not been done; it is no good if it only tells you what just went on.  It is necessary to extend the ideas beyond where they have been tested.”

(Feynman, “Seeking New Laws”, p. 164)

So that’s the pursuit of discovery as our friend Feynman sees it: if at first you don’t succeed, guess, then guess again.  It’s often hard when mining the echoes of history to know which conversations to shout forward and which to let fade out.  It’s also rare that I quote so much from one voice.  But as advice from one insightful scientist to future generations, Feynman’s reflections on the art of scientific discovery are still a conversation worth hearing.

At Discovery’s Edge


The balancing act between theory and practice, qualitative insight and quantitative assessment, is a tough one.  In my quest to develop a repertoire of skills and practices targeted at scientific discovery, theory and qualitative insight have dominated the body of literature I’ve read so far.  That changed when I came across a magnificent pair of papers published by a group of sociologists and a theoretical biologist.  Their goal was to analyze a tension often discussed among scientists: stick with tradition or pursue innovation?

In these recent papers, the authors devise a living map of “what is known”, represented as nodes and the links between them on a network graph.  They use biochemistry as their scientific use case; nodes represent molecules and links between nodes represent published connections between molecules.  They build this map from the molecules and connections appearing in the abstracts of published journal articles—around 6.5 million abstracts.  Ah, the glorious face of big data.
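The construction itself is easy to sketch.  Here is a minimal Python toy (the molecules and abstracts below are hypothetical examples; the actual papers mined millions of MEDLINE abstracts with text-mining tools): treat each abstract as the set of molecules it mentions, and count co-mentions as weighted links.

```python
from collections import Counter
from itertools import combinations

# Toy stand-in for the papers' corpus: each "abstract" reduced to the
# set of molecules it mentions (hypothetical examples, not real data).
abstracts = [
    {"glucose", "ATP", "hexokinase"},
    {"ATP", "ADP", "hexokinase"},
    {"glucose", "insulin"},
    {"insulin", "ATP"},
]

# Nodes are molecules; a link between two molecules is a co-mention in
# one abstract, weighted by how many abstracts report that pairing.
links = Counter()
for entities in abstracts:
    for pair in combinations(sorted(entities), 2):
        links[pair] += 1

nodes = set().union(*abstracts)
print(len(nodes), "nodes and", len(links), "links")   # → 5 nodes and 7 links
print("strongest link:", links.most_common(1)[0])     # → (('ATP', 'hexokinase'), 2)
```

On a map like this, "tradition" is research that thickens existing links around well-studied nodes, while "innovation" adds nodes or links that were not there before.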

So, in this little microcosm of knowledge about discoveries in biochemistry, what can we learn about community-wide research strategies?

The first thing we learn is that there are techniques to map “what is known” and “how was it discovered” in a way that makes them amenable to quantitative interrogation.  This is no small matter, because in these two papers the authors pursue two fascinating questions: (1) what balance does a scientific community strike between pursuing tradition and pursuing innovation as the knowledge network grows; and (2) what can be done to maximize the exploration of such knowledge networks?

The answer to the first question is given in the longer of their two sociology papers (heavy reading for a poor physicist, but worth every ounce of effort).  As the knowledge network grows, research becomes more intensive and localized on already well-explored nodes and links, i.e., research favors tradition.  Innovation, exploring or seeking new nodes and links, is marginalized and receives less attention.  The authors connect this leaning toward tradition and away from innovation to numerous factors, including some of the usual suspects, like pressure to achieve high publication and citation rates for job security and advancement.

In their second, shorter paper they examine their newly quantified knowledge network from the perspective of maximizing discovery, defined as discovering new links and nodes in the network.  They find that when the knowledge network is young, the traditional approach, a localized search moving outward from central nodes (important molecules), is efficient.  But as the knowledge network grows this approach becomes increasingly inefficient, even though it is the strategy that becomes more favored and better represented in the published literature over time.

They suggest a number of policy remedies that would trickle down to individual discoverers by enacting change at the community level:

“Thus, science policy could improve the efficiency of discovery by subsidizing more risky strategies, incentivizing strategy diversity, and encouraging publication of failed experiments…Policymakers could design institutions that cultivate intelligent risk-taking by shifting evaluation from the individual to the group…[Policymakers] could also fund promising individuals rather than projects…Science and technology policy might also promote risky experiments with large potential benefits by lowering barriers to entry and championing radical ideas…”

[Rzhetsky et al., PNAS vol. 112, no. 47, p. 14573 (2015)]

As always, though, I remain most concerned with how the individual can take action: how, with my own two hands and one mind, can I weave outward and effect change in the shape and size of the known web of knowledge, especially in my own field of neutrino physics?  If I combine what I’ve read in these fascinating sociology papers with my thoughts in “A Good Map is Hard to Find”, then I formulate an idea: my own two hands and lone mind can make one PowerPoint.

Now, I’ve been invited to attend a workshop to discuss possibilities for discovering new physics in a newly observed reaction called coherent elastic neutrino-nucleus scattering, or CEvNS (i.e., a neutrino bounces off the nucleus in an atom as if it were one solid unit, instead of bouncing off of one proton or one neutron in the nucleus).  Workshops to produce agendas, devise long-term strategy, and draft roadmaps and white papers are ubiquitous in physics (and other sciences).  It’s how communities foster consensus on “what to do next.”

To me, an agenda-setting, roadmap-writing workshop seems like the perfect time to field test the idea of a “discovery call”: a voluntary, open-science call to action to trial scientific discovery strategies.  A “discovery call” is something you can talk about with colleagues, add to a website, or put on a PowerPoint slide.  The discovery call I’ll be pitching is as follows:  in physics, particles are analogous to molecules and particle interactions and mechanisms are analogous to connections between molecules.  Can we build a network map of published trends in our area of interest, CEvNS, and consider new strategies to maximize our network coverage with minimal experiments?  And can we take this a step further and build two other deeply analogous maps to use for comparison: one for neutrino neutral current interactions (i.e., where a neutrino bounces off another particle) and one for neutrino charged current interactions (where a neutrino bounces off of another particle, changing particle type in the process)?  It would be a way to provide a roadmap with a greater degree of informed choice about how, and how well, we’ve explored a given microcosm.

It seems to me that we have an opportunity to leverage our own history to help point our compass toward discovery, and to be able to see where untried paths have been neglected but might now be the roads best taken.  Perhaps today is the time to map what is known, with greater awareness and more practical purpose, so that tomorrow we can stand at discovery’s edge.

 

Interesting Stuff Related to This Post

 

  1. Jacob G. Foster, Andrea Rzhetsky, and James A. Evans, “Tradition and Innovation in Scientists’ Research Strategies”, American Sociological Review, volume 80, issue 5, pages 875-908 (October 1, 2015).
  2. Andrea Rzhetsky, Jacob G. Foster, Ian T. Foster, et al., “Choosing experiments to accelerate collective discovery,” Proceedings of the National Academy of Sciences of the United States of America (PNAS), volume 112, issue 47, pages 14569-14574 (November 24, 2015).

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “At Discovery’s Edge”, The Insightful Scientist Blog, September 21, 2018, https://insightfulscientist.com/blog/2018/at-discoverys-edge.

 

[Page feature photo: A dewy spider’s web in Golcar, United Kingdom.  Photo by michael podger on Unsplash.]

The Powerful Patroness


J. Hollingsworth’s article on institutional factors affecting scientific discovery and D. Coyle’s book discussing the role of coaching in the development of exceptional ability have me thinking about how connections with other people affect the discovery potential of the individual. In particular, they got me thinking about the Master-Apprentice model.

Physics Ph.D. training essentially follows a Master-Apprentice model with universities playing the role of guilds.  Each guild (institution) dominates in its local region and specializes in certain styles (physics specialties) as well as techniques.  The masters (staff researchers) have their own production agenda (research agenda) and apprentices (graduate students and undergraduates) join masters via recommendations, provided resources exist (places in the program and funding).  Apprentices perform more routine tasks, at the direction of the masters, that help prepare work for those at higher skill levels.  A journeyman, who started as an apprentice and has gained more skill and experience, undertakes intermediate tasks with less supervision, following the master’s agenda.  Eventually, with continued practice and experience, journeymen become masters.  Masters set the agenda and retain the most skilled work for themselves.  Whether we are talking about the training of artists by guilds in the 15th century or the training of physicists by universities in the 21st century, the Master-Apprentice model still exists.

Thinking of Coyle again highlights why this analogy came up.  In one instance, this guild system produced a skilled hotbed for artistic invention—Renaissance art in Florence, Italy, from greats like Da Vinci, Michelangelo, Verrocchio, Donatello, and others.  Similarly, Hollingsworth’s paper suggests another discovery hotbed: the Department of Biochemistry and Molecular Biology and the Department of Organismic and Evolutionary Biology at Harvard University in the 1950s, which produced an exceptional number of discoveries over a few decades.

I think this idea of institutional hotbeds that produce famously skilled individuals will sound familiar to many.  I’ve been both outside and inside such zones over the course of my life.  Having been on the outside, I’m forced to ask myself, “what’s an outsider to do when you don’t have access to a discovery hotbed?  When no master or mistress will accept you as an apprentice?”  Quite frankly, the many theories about how institutional hotbeds arise and are sustained are informative, but the statistical truth is that most of us will either never have access to one or will not have consistent access in our lifetime.  So, what’s a dedicated mind intent on discovery to do?

An alternative strategy that has worked for me is what I call the “powerful patroness” model, in contrast to the “Master-Apprentice” model.  I say patroness, as I’ve never had a patron in the sense I’m about to define, and I used “master” before because in my physics training, I’ve never had a “mistress” or a female supervisor.

A powerful patroness is an individual who involves herself in supporting the discovery capacity of another individual, even when the patroness and discoverer share no common professional or discovery goals, by physically intervening on the discoverer’s behalf.  This intervention can be a conversation, a recommendation, advocacy, or funding, to name a few examples.

I have had at least five powerful patronesses in my time and, as a result of their contributions, I have been able to move back and forth among institutional discovery zones in the physics system, and, on occasion, been able to break inside from outside.  While it remains to be seen what impact this will have on my discovery track record in the long run, it is interesting to note that the most recent addition to my patroness pantheon is the late, great namesake for my current position as a Dame Kathleen Ollerenshaw Fellow.

Dame Kathleen Ollerenshaw was an astute mathematician and politician, who came of age in England during the World Wars.  She was instrumental in the founding of the Royal Northern College of Music in Manchester, devised an equation to solve the Rubik’s cube toy, and only recently passed away at the age of 101. All of which is only made more impressive by the fact that a viral infection left her deaf from the age of eight.  The fellowships supported by a trust and named in her honor are not strictly field or research agenda specific, but are competitively open to a broad array of researchers.  In a recent article on fostering discovery, it’s suggested that one way to support discovery is to support researchers, not research agendas, to allow for greater risk taking:

The sustained preference for conservative research, despite greatly expanded access… and the chance for greater rewards, suggests that institutional structures incentivize lower-risk research. For example, a young researcher pressured to publish frequently will favor incremental experiments more likely to be accepted by journals.

“If we want to push that risk, then we’ll have to change the recipe,” [James Evans, a study author] said. “We’ll have to reward at the group level, like Bell Labs did in its heyday, or fund individual investigators independent of the project, so they can intelligently allocate risk across their personal research portfolios.”

I am a proponent of multi-stream approaches, not just one mainstream approach, so I like the option of seeing all models at play – Master(Mistress)-Apprentice, powerful patron/ess, researchers and research agendas.  It seems that in pursuit of discovery, sometimes people are your greatest resource.

Representation (Not Rightness) Rules


Which is a more correct representation of a beloved member of your life—an audio recording, a photograph, a video recording, a pencil sketch, a realist portrait painting, or an abstract painting?  That’s the question I keep asking myself every time I think about analogies, metaphors, and representations in physics.

The classic example of a representation challenge in physics is wave-particle duality:  do particles act like little billiard balls?  Or like waves moving through a non-existent medium?  The answer is they act like both.  The challenge is that, as realities, they feel mutually exclusive.  But, as representations, they act as complements.  Each representation, either wave-like or particle-like, gives a framework for describing how a fundamental object, like a photon or an electron or a neutrino, will behave under certain circumstances.  Both representations are right in the sense that they will produce precise, numerical results that can be calculated and will match observed values.

In the same way, if I gave you a photograph of a close family member in my life to try and describe their behavior—how they interact with the world—you would gain one kind of understanding.  If, on the other hand, I gave you an audio recording of that same family member, the information would be complementary to what you learned from the photograph, but completely different.  Obviously though, we don’t cry foul and say, but how can the person be invisible voice waves and a static two-dimensional color object at the same time, and what does this have to do with their behavior?

That’s because we understand that they are representations of a thing and not the thing itself.  Of course, from an intellectual standpoint this argument is partly philosophical and psychological and has had volumes written about it.  But from a practitioner standpoint there’s no challenge: both representations are valid, and the combination gives a better understanding than either one representation alone.  In fact, in the close family member’s behavior analogy it’s easy to see that having more representations is better, because each added representation layers our perspective with additional understanding.

If I were trying to discover something new about someone else’s family member it might even help to force me to use different representations: an audio recording might tell me about how that person speaks or interacts with others, a photograph might show me that person’s physical characteristics and the kinds of events they participate in, an abstract painting might tell me what about that person most captures someone else’s perception.

In physics, having multiple representations of the same physical system can do the same thing, especially since most of our studies want to know about the behavior of something (its dynamics), but most representations are static (don’t move).  Words and math sit on a page.  Photographs sit frozen in a flat plane.  Videos sit in a flat plane and replay a sequence of still shots at high speed over and over.

At least with a living family member we can go meet them in person.  We can set aside the photograph.  We can ignore the voicemail.  We can turn off text and video messaging and go get all that experience in real-time, face-to-face.  Not so in physics.  The simulated photographs, the recordings, and the equations are as close as we will ever come to some members of nature’s family, especially in particle physics.  Biology, geology, and the social sciences, to name a few, have the advantage over particle physics, in that respect.  Though any investigations into the past are equally handicapped by lack of direct access.

So, it seems to me we need to accumulate as many representations and models as we can get our hands on.  Aim for a collage, not a pixel.  No one representation will ever be all things to all situations.  Because no representation will ever be the real thing.  By narrowing down representations to “the right picture” instead of generating representations to get “the right mix” we cut off a route to discovering something new.  After all, when we allowed both the wave and particle representations into physics we opened the door to countless previously inconceivable and undiscovered phenomena, like neutrino oscillations (the ability for a neutrino particle to spontaneously change particle type as it travels, which relies on quantum mechanical wave interference between its constituent parts).  When it comes to conceiving of the inconceivable, representation, not rightness, rules.
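To make the interference remark concrete: in the standard two-flavor vacuum picture, the oscillation probability is P = sin²(2θ) sin²(1.27 Δm²[eV²] L[km] / E[GeV]).  Here is a minimal sketch in Python (the numerical values are illustrative only, not tied to any particular experiment):

```python
import math

def oscillation_probability(theta, dm2, L, E):
    """Two-flavor vacuum oscillation (appearance) probability.

    theta: mixing angle in radians; dm2: mass-squared splitting in eV^2;
    L: baseline in km; E: neutrino energy in GeV.
    """
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2 * L / E) ** 2

# Illustrative numbers only: maximal mixing, with baseline and energy
# chosen so the interference phase sits near pi/2 (probability near 1).
print(oscillation_probability(theta=math.pi / 4, dm2=2.4e-3, L=515, E=1.0))
```

At theta = 0 the interference term vanishes and the probability is identically zero, which is one way the formula encodes that oscillation requires mixing between the constituent parts.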

Base 10


What does it mean to have a canon?  In English studies this is usually a body of texts that it’s assumed most serious scholars have read deeply and which somehow embody whatever characteristics or themes are deemed most relevant to a given perspective (e.g., a Western canon, a Shakespearean canon, a post-colonial literature canon, and so on).  In other words, the members of a canon act as pillars in the foundation of a shared body of knowledge.

In physics, we don’t really have a canon.  There are many famous historical papers and a few books and textbooks, but mandatory deep study and a shared list of “why these are canonical” (even if hotly debated) is not really in our culture.  There are perhaps, to a degree, canonical problems—physics problems everyone sees and attempts (recognizing that, ironically, who your “everyone” is will vary by sub-field).  These are most often presented in one of two groupings: by subject (mechanics, thermodynamics, astrophysics, etc.) or by math (differential equations, group theory, etc.).  Only ever so rarely are these problems grouped by core concept in any consistent way (perhaps Feynman’s three volume lecture series is the best example here).

My roving imagination and mind were hard at work again when I came across a piece about speed reading.  What captured my attention most was the emphasis on (1) first learning the technique, then (2) practicing the technique for speed while ignoring comprehension, then (3) practicing the technique at speed with comprehension.  For a while, it has seemed to me that analogical thinking is a good test case for a discovery strategy applied to active, professional research.  But how to do that?

I have some ideas for how to synthesize a few operational analogical processes, which I’m hoping to work on with the help of master’s students this Fall semester.  But the speed reading piece reminded me that practice is key.  So how to practice?  Well, in English studies you practice critical thinking skills on the canon where you can compare your results with others, then you venture out into other non-canonical areas.  In physics our own canon is problems, so that means that to study and practice discovery strategies one will need a good discovery canon.  I’ve nicknamed the physics discovery canon I’m developing “Base 10.”

In my experience as an undergraduate student I always followed what I called “The Rule of 10”: practice a new math technique ten times before applying it to what you actually want to solve.  This was a necessary expedient since, by the time I started back in on my physics degree, it had been 5 or 6 years since I had studied the subject and I took the minimum number of courses (which meant little math) to get out of undergraduate and on to graduate school as quickly as possible (a money problem, not a time problem).

But of course, this rule of ten strategy also requires problems to practice on.  Hence, base 10 as a general rule for the number of test cases I need to try something out.  Now my natural inclination toward favoring analogical discovery strategies over others, combined with another math-inclined strategy known as “easy cases” (aka “toy models” where you keep the simple stuff and leave out the complicated details) has led me to believe that the standard groupings of physics problems may not be suited to my needs.  I need more conceptually useful categories right now, not categories that are mathematically similar or topic dependent.  It’s just a hunch, but worth an attempt.

So, I am slowly compiling my Base 10 physics discovery canon to practice discovery strategies on.  The worst that happens is a little trial and error (technically, another discovery strategy that goes by the formal name of “generate and test”).  And if it doesn’t work out then, as I always tell my students, there’s a reason why it isn’t called “trial and success.”
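Generate and test is, at heart, a loop: propose a candidate, check it against a criterion, and repeat until something passes or you run out of patience.  A minimal sketch in Python, where the generator and the test predicate are hypothetical stand-ins chosen just to make the loop concrete:

```python
import random

def generate_and_test(generate, test, max_trials=1000):
    """Generate-and-test: propose candidates until one passes the test,
    or give up after max_trials attempts ("trial and error", after all)."""
    for trial in range(1, max_trials + 1):
        candidate = generate()
        if test(candidate):
            return candidate, trial  # success: the candidate and how many tries it took
    return None, max_trials  # no success within the trial budget

# Toy example: blindly hunt for a number whose square ends in 76
random.seed(42)  # fixed seed so the run is repeatable
found, trials = generate_and_test(
    generate=lambda: random.randint(1, 1000),
    test=lambda n: (n * n) % 100 == 76,
)
```

The point of the sketch is only that the strategy separates cleanly into a proposal step and a checking step; everything interesting about a real discovery problem lives in how clever those two pieces are.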

The Idea Mill


It occasionally strikes me just how many mythical notions I had about how researching discovery, fusing it with my own neutrino research, and putting it on “The Insightful Scientist” site would work.  Perhaps “preconceptions” or “ideas” would be a better word.  Which has me thinking about ideas.

I’m currently part of what is known as “the Hive” at my institution, a tribute to the symbol of the city we’re in: the Manchester bee.  But on the ground floor of our building is something known as the “Ideas Mill”.  It’s a place for lectures, break-out groups, and conferences, and for students to study and lounge.  Its name honors another Mancunian legacy: industrial mills.

When it comes to scientific discovery I think we too often have an image of the mind as a vessel that gets filled up.  If you are an “average person” then that liquid is just water: necessary, but uninspiring, and on really long days the vessel can get a bit leaky.  If you are one of the “gifted ones”, like good old Albert (Einstein, of course), then the liquid is more like fuel and your vessel an engine, a honed machine churning out “something new” at an astonishing rate.

But watching my own ideas evolve in writing, and thinking about the meeting of minds down in the Ideas Mill and expressions like “grist for the mill”, I think a more helpful picture might be to see the “discovery mind” as a mill itself.

Raw material is taken in (content, like knowledge and experience).  The materials are prepared for production in some way (distilling, sorting, accepting and rejecting content).  Then the materials are worked upon to create something new (refining, fusing, categorizing mental content).  And at last the final product is packaged for sharing and consumption (articulating mental content).
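Those four stages form a simple pipeline, which can be made concrete with a small sketch.  The stage functions below are hypothetical placeholders of my own invention, not anything from the post; they only illustrate how grist might flow through the mill:

```python
def intake(sources):
    """Take in raw material: collect knowledge and experience."""
    return [item.strip().lower() for item in sources]

def prepare(material):
    """Prepare for production: sort content, rejecting duplicates."""
    return sorted(set(material))

def work(material):
    """Work the material: fuse the pieces into something new."""
    return " + ".join(material)

def package(product):
    """Package for sharing: articulate the result."""
    return f"Idea: {product}"

# Grist moving through the mill of the discovery mind
grist = ["Reading ", "experiment", "conversation", "reading"]
idea = package(work(prepare(intake(grist))))
```

The design point is that each stage is separable: you could improve the preparation step (what you accept and reject) without touching how the final product is articulated, which is exactly the appeal of seeing discovery as a process rather than a single flash.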

Each step, as the grist moves through the mill of the discovery mind, could become better understood, so that discovery-friendly tactics could be applied at key points throughout production.  This matches somewhat with aspects of the three-phase picture of discovery found in parts of the philosophy of science.  My own view of the “discovery cycle” has already evolved to include six phases that I think are suited to theoretical physics.  The main point is to perceive ideas and discovery as a process that can be built up and refined in the mind.  In this picture, how productive we are at discovery will depend on how much care we have taken with our internal mode of manufacture.

In any case, three phases, six phases, or other, it’s something to think about. In other words, grist for the mill.