Author: Bernadette K. Cogswell

Real Particles in a Ghost Universe

I love imagination.  Ergo, I love thought experiments.

I was also inspired to become a physicist by Einstein.  Einstein was famous for his thought experiments.  Ergo, I love thought experiments.

My most recent thought experiment has been to try and flip on its head a way neutrino physicists have of making neutrinos sound as mystifying as they are to us as an object of study.  We call them “ghost particles” inundating our universe.  In fact, I once read an interesting estimate that in the one hour it would take you to read a popular science pamphlet on neutrinos, “about 100,000,000,000,000,000,000 [one hundred billion billion] neutrinos [will have] zipped through your body unannounced” (pamphlet p.23).  Which does seem ghost-like…and a little creepy.

But I wanted to break free of seeing things according to the usual narrative.  And it occurred to me that we don’t think of particles as ghosts; we think of mist-like people as ghosts.  At which point a thought experiment occurred to me: suppose I tried to re-vision this weakly interacting state of neutrinos, giving primacy to their perspective over mine?  In this case, we would be the ghosts, the mist-like forms that could be passed right through, if I envision myself as a little neutrino being born in and careening out of a ghost-like sun and whizzing through a ghost-like earth.  Then neutrinos would appear to be “real particles” in a “ghost universe”.  Since neutrinos in the universe vastly outnumber humans, the neutrino “ghost universe” is the majority view, compared to our “ghost particle” minority view.  Which raised the question: could this oddly anthropomorphic thought experiment help inspire me to discover something new?  Or was it the worst thought experiment ever?

Much to my delight, around the time this idle thought popped into my head I was reading a paper about how Einstein made discoveries by the historian of science J. D. Norton.  I wandered to Norton’s webpage and was perusing his articles when this title jumped right out at me: “The Worst Thought Experiment” in The Routledge Companion to Thought Experiments (there’s a book for that?!).  In it Norton dissects and demolishes a thought experiment by physicist Leo Szilard regarding “Maxwell’s demon” (itself a thought experiment suggesting that the second law of thermodynamics can be violated, i.e., that the entropy of an isolated system can decrease over time), later extended to include information theory.  Szilard’s argument became the basis for a line of discussion and thought experiments that continues to today, with articles appearing in Nature and Scientific American.

Whether I agree or not with the particular physics assessment of this particular thought experiment is not important here.  What is of real value is Norton’s canny synthesis of criteria by which good versus bad thought experiments can be distinguished:

GOOD THOUGHT EXPERIMENT

  1. Thought experiment examines a specific behavior that illustrates a more general behavior.
  2. Thought experiment idealizes away irrelevant behaviors to highlight relevant behaviors.

BAD THOUGHT EXPERIMENT

  1. Thought experiment examines a common behavior that misrepresents general behavior.
  2. Thought experiment idealizes away relevant behaviors as irrelevant behaviors.

Here at “The Insightful Scientist” my purpose is always to emphasize practice over philosophy.  So if good thought experiments can help to foster scientific discovery, as Einstein’s case of riding on light beams suggests, and even bad thought experiments can foster scientific activity, as Norton’s Szilard case suggests, then there’s a practice technique for thought experiments embedded in Norton’s criteria.

The technique might look something like this: step 1) come up with what you think is a good thought experiment in your area and do the calculations; step 2) come up with the correlated bad thought experiments, e.g., willfully make nuisance parameters central, use easy extreme cases that are atypical, etc. and do the calculations; step 3) see if your good thought experiment holds up in the face of your bad thought experiment.

To go one layer further, I suspect that the embodied-mathematics conceptual metaphor idea I’ve previously mentioned in “The Re-Education of an Educated Mind” is partly at play in deciding good from bad thought experiments.  One key to the use of conceptual metaphor in mathematics is to import, in their entirety, the inferences embedded in the metaphorical source topic and apply them, consistently, to the target topic.  For example, in their book discussing conceptual metaphors and mathematics, Lakoff and Nunez define four grounding metaphors that translate sensory-motor experience into mathematics.  One of these is the “Arithmetic Is Object Collection” metaphor (p. 55), which goes like this:

Arithmetic is Object Collection Metaphor by G. Lakoff and R. Nunez

Source Domain (Object Collection)  → Target Domain (Arithmetic)

Collection of objects of the same size → Numbers

The size of the collection → The size of the number

Bigger → Greater

Smaller → Less

The smallest collection → The unit (one)

Putting collections together → Addition

Taking a smaller collection from a larger collection → Subtraction
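The metaphor above is concrete enough to act out in code.  Here is a playful sketch (entirely my own illustration, not from Lakoff and Nunez) in which each arithmetic operation on the target side is carried out literally as an action on a collection of objects on the source side:

```python
def collection(n):
    """A collection of n identical objects (the source domain)."""
    return ["o"] * n

def size(coll):
    """The size of the collection maps to the size of the number."""
    return len(coll)

def put_together(a, b):
    """Putting collections together maps to addition."""
    return a + b

def take_away(larger, smaller):
    """Taking a smaller collection from a larger one maps to subtraction."""
    return larger[: len(larger) - len(smaller)]

# The inferences of the source domain carry over consistently:
three, two = collection(3), collection(2)
assert size(put_together(three, two)) == 3 + 2   # addition
assert size(take_away(three, two)) == 3 - 2      # subtraction
assert size(collection(1)) == 1                  # the smallest collection is the unit (one)
```

The point of the exercise is the consistency requirement: every inference that holds for object collections (for example, putting a collection together and then taking the same collection away returns you to where you started) must hold for the arithmetic it maps onto.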

If one takes the source topic to be a kind of physics (thermal fluctuations, in Szilard’s case) and the target to be the content of the thought experiment (Szilard’s one-molecule gas machine), then Norton’s discussion seems to highlight that Szilard did not port over key inferences in a consistent way; hence Szilard’s thought experiment may be a bad one.

Another element of a good thought experiment worth adding to Norton’s focus is that it should not just attempt to visualize an experiment.  It should visualize an experiment where the outcome seems to violate a fundamental assumption or belief.  In other words, thought experiments are about testing conundrums, not just daydreaming.  A more illustrative example of a conundrum and a good thought experiment is the trolley problem from the philosophy of ethics, which tests the idea that “the good of the many outweighs the good of the few” by asking you to make a moral choice between two immoral options.

So back to my little neutrino thought experiment.  It’s easy to see now that it’s insufficiently formulated to be a thought experiment at all: there’s no idealized specific behavior as part of the thought.  Even more importantly, I have no concept of “how ghosts behave” to act as an inferential set to import into my thought.  It turns out that what I offered at the beginning of this log entry was just a good old-fashioned visual metaphor and not a thought experiment at all; an epic fail!  But as I said in “What You Fire Is What You Forge”, failure is key to improvement.  By trying to nail down a more systematic way to evaluate thought experiments, I’ve stumbled upon some ideas for how to reformulate bad thought experiments into good ones: pinpoint what makes them bad and improve it.

So, the world of our little mercurial neutrinos, able to spontaneously change their identity at will, is indeed an interesting breeding ground for the pursuit of discovery.  And an epic failure of understanding proved to be a spark point for greater insight.  A team of researchers at the University of Chicago may be right: one of the greatest gifts we might be able to give ourselves is a repository of failed attempts.  It turns out that the worst thought experiment may prove to be our best starting point for meaningful insight.

What You Fire Is What You Forge

I have been letting ideas about practice, how to gain skill, how to gain mastery, and how to gain expertise percolate for some time now.  Because what’s the point of learning a new skill, like scientific discovery, if you can’t practice it until you can do it well consistently?  I’ve been very worried about this idea of practice.  Mainly because, although I know how to practice, I don’t know how to practice best.

When I was younger, especially as a student, I had time.  It was my job to practice.  Homework was built into my days.  Now that I’m a researcher, I don’t get allotted practice time.  I have to treat practice like a second, part-time, unpaid job.  That means every practice session needs to count.

I was feeling a little peeved and dispirited about this lack of time, which got me to thinking about adults who do have dedicated practice as part of their paid day job: professional athletes and professional musicians.  Initially, this led to binge watching documentaries on professional American football teams, just oozing jealousy as I watched them practice, practice, practice, both on and off season.

Turning to sports proved to be a useful trigger for finding a new resource.  Whenever I get in a funk about discovery, my go-to solution is to read a nonfiction book.  Nonfiction books are easier to read than peer-reviewed journal articles, but more substantive than magazine and news articles.  The best ones give you references to research papers, so you can explore further once you’ve gotten an overview.  In other words, popular nonfiction books are a good place to start when you’re a beginner or have little time or energy; save the research articles for when you are more focused and more knowledgeable.

So, hot on the heels of envying NFL players with practice in their job descriptions, I found a nonfiction book to soothe my temper, written by sports journalist Daniel Coyle and called “The Talent Code: Greatness Isn’t Born, It’s Grown”.  Primarily through an investigation of exceptionally skilled professional athletes, and in conversation with neurologists, Coyle devises the following hypotheses:

“(1) Every human movement, thought, or feeling is a precisely timed electric signal travelling through a chain of neurons [in the brain] —a circuit of nerve fibers.  (2) Myelin is the insulation that wraps these nerve fibers and increases signal strength, speed, and accuracy.  (3) The more we fire a particular circuit, the more myelin optimizes that circuit, and the stronger, faster, and more fluent our movements and thoughts become.”

(D. Coyle, The Talent Code, p. 32)

Myelin’s role works like this: when you first acquire a neural circuit, the parts that carry signals are “bare”, like an exposed electrical wire that leaks signal.  As you use this circuit to execute the skill and fail, your brain recognizes that the system needs improvement to meet the demands placed on it.  One of those improvements is wrapping a substance called myelin around the nerve fibers, like insulating an electrical wire.  The more times this occurs, the stronger and faster you can fire that skill.  Practice enough in this deep, thoughtful, responsive state, working in the zone where you make mistakes and correct them, and you can become “talented.”  As Coyle puts it, “Struggle is not an option: it’s a biological imperative” (p. 34).

In case you’re wondering about age:

“[We] continue to experience a net gain of myelin until around the age of fifty, when the balance tips toward loss.  [But we] retain the ability to myelinate throughout life—thankfully, 5 percent of [the needed cells] remain immature, always ready to answer the call.”

(D. Coyle, The Talent Code, p.45).

But the real beauty is that:

“Myelin is universal.  One size fits all skills…Myelin is meritocratic: circuits that fire get insulated…To put it another way, myelin doesn’t care who you are—it cares what you do.”

(D. Coyle, The Talent Code, p.44)

(Just another reason to “Feed the White Wolf” if you’re going spend time training your brain for discovery.)

This gave me an immediate possibility for how to make the most of my daily discovery practice sessions.  First, fire up those skills frequently.  Second, work at the edge of failure and correct mistakes ASAP.  The idea of getting it wrong to get it right is contrary to my physics training.  Usually you get a lecture (or a seminar), read something, and then hope to get the problems right; you don’t actively solve problems and rejoice at errors, let alone seek out opportunities to make them.

But now I see it this way: at my daily practice sessions, where I take one discovery technique and apply it to a random physics question (at the moment, the questions I discuss with my physics students in classes), I need to push my ability to apply the technique until I make some mistake.  At that point the neurons are firing, the signal is sent that the system is failing, and by actively seeking out and correcting an error I am building a better brain for discovery.

So, my reading of Coyle’s book is that you want to handle your neurons this way to become “talented” at a skill:

  1. Fire them frequently (practice)
  2. Force them to fail (work at the edge of your skill level)
  3. Feed them feedback (error correct on the spot; don’t move on until you’ve got it right)
  4. Fire them again (do it right)

The more you fire your synapses at the edge of your discovery skill level, the more your brain will help you craft a better skill set with which to discover things.  It’s something iron smiths have known for millennia: what you fire is what you forge.

Echoes of History

How do I model excellence?  I’ve been reading some ideas on how to model the performance of successful people and wanted to translate this into scientific discovery and physics.  If I want to model the physics discovery capabilities of one of the greats—Einstein, Newton, Noether, Fermi, Meitner, etc.—how exactly do I do that?  My natural response was to read biographies and historical analyses of how the discoverers made their discoveries.  My thinking was that by reading enough of these I could distill contexts, triggers, events, and habits that could be molded into modern practices.  But who to start with?  Should I be biased by childhood serendipity and pick Newton or Einstein?  Or pick a neutrino pioneer like Pontecorvo or Pauli?  And then it occurred to me to just ask Google: “famous physicists’ ideas on discovery”.

It turns out Google AI came up with a pretty good answer, given by Feynman himself.

Richard P. Feynman is a well-known historical physicist who, among other honors, won the 1965 Nobel Prize in Physics.  Feynman is a famous figure in the annals of scientific discovery, almost as famous as Einstein (though Einstein has figurines and action figures while Feynman does not).  Like Einstein, Feynman was generous in communicating his insights and methods to his colleagues and the public at large.

In 1964, Feynman gave a series of seven public lectures at Cornell University, taped by the BBC and later published in transcript form as “The Character of Physical Law.”  The seventh lecture, titled “Seeking New Laws”, opens:

“What I want to talk about in this lecture is…what we think we know, what there is to guess, and how one goes about guessing.  Someone suggested that it would be ideal if, as I went along, I would slowly explain how to guess a law, and then end by creating a new law for you.  I do not know whether I shall be able to do that.”

(Feynman, “Seeking New Laws”, p. 1)

Upon reading this I felt as if the Google search bar had become a magic lamp and granted me my first wish: an interview with a physicist who knows how to discover things and where he gives advice about how to discover things!

Feynman suggests that to look for a new law, “First we guess it” (p. 156).  Then you calculate the result of the mathematical translation of the law and compare it to experiment.  Now guessing as adeptly as Feynman did, without guidance, is a bit intimidating, but luckily he goes on:

“Because I am a theoretical physicist…I want to now concentrate on how you make the guesses.  As I said before, it is not of any importance where the guess comes from; it is only important that it should agree with experiment, and that it should be as definite as possible…[One might think that] guessing is a dumb man’s job.  Actually it is quite the opposite, and I will try to explain why.  The first problem is how to start.”

(Feynman, “Seeking New Laws”, p.160)

Two pages later Feynman offers some practical advice on how, precisely, to start:

“One way you might suggest is to look at history to see how the other guys did it.  So we look at history.  We must start with Newton.”

(Feynman, “Seeking New Laws”, p.162)

Now at this point I’m delighted: Feynman thought of this historical digging idea too!  So the approach seems less frivolous now and I don’t feel so guilty for Googling; even Feynman might have tried it.  In fact, Feynman goes on to summarize his perception of the approaches used at five key turning points in physics history, which I’ll swiftly recap here (pp. 162-163, 170):

  1. Newton—guess a deeper law by cobbling together mathematical ideas close to experimentally observed data
  2. Maxwell/Special Relativity—guess a deeper law by cobbling together mathematical ideas that other people have devised, see where they disagree, and invent whatever it takes to make them all agree
  3. Quantum Mechanics—guess the right equation and make it ruthlessly accountable to measurement
  4. Weak Particle Decays—guess the right equation and be willing to challenge the contradictory experimental evidence
  5. Einstein—guess a new principle and add it to the known ones

But now that we’ve mined history, Feynman goes on to paradoxically conclude that:

“I am sure that history does not repeat itself in physics…  The reason is this.  Any schemes – such as ‘think of symmetry laws’, or ‘put the information in mathematical form’, or ‘guess equations’ – are known to everybody now, and they are all tried all the time.  When you are stuck, the answer cannot be one of these, because you will have tried these right away.  There must be another way next time.  Each time we get into this log-jam of too much trouble, too many problems, it is because the methods that we are using are just like the ones we have used before.  The next scheme, the new discovery, is going to be made in a completely different way.  So history does not help us much.”

(Feynman, “Seeking New Laws”, p. 163-164)

At which point I’m plunged into annoyance: Feynman thinks the only way to discover something new is to discover something new to make new discoveries with!

After Feynman throws out this chicken-and-egg paradox about how to discover something new, his exact ideas get tricky, and paraphrasing fails to do justice to the direction the rest of his lecture takes (the original transcript is worth a read).  But from a practice point of view, my numbered list above drives home Feynman’s echoing refrain: guess.  That leaves two questions: (1) how to guess, and (2) how to evaluate whether a guess is any good, assuming you ignore the historical options in the list above, as Feynman suggests will be necessary.

According to Feynman, “You can have as much junk in the guess as you like, provided that the consequences can be compared with experiment” (p.164).  He goes on to suggest that sometimes the guess will revolve around deciding to keep some assumptions and throw others out.  He also suggests that having multiple theories or representations for the same outcome can help:

“By putting the theory in a certain kind of framework you get an idea of what to change…[If you have two theories A and B,] although they are identical before they are changed, there are certain ways of changing one which looks natural which will not look natural in the other.”

(Feynman, “Seeking New Laws”, p.168).

He also seems to weigh in against ad hoc add-on guesses:

“For instance, Newton’s ideas about space and time agreed with experiment very well, but in order to get the correct motion of the orbit of Mercury…the difference in the character of the theory needed was enormous [i.e., you needed Einstein’s general relativity].  The reason is that Newton’s laws were so simple and so perfect…In order to get something that would produce a slightly different result it had to be completely different.  In stating a new law you cannot make imperfections on a perfect thing; you have to have another perfect thing.”

(Feynman, “Seeking New Laws”, p.169).

So, no tweaking, fudging, or knob turning allowed.  Lastly, Feynman discusses how to evaluate if a scientific guess is good or bad:

“It is always easy when you have made a guess, and done two or three little calculations to make sure that it is not obviously wrong, to know that it is right.  When you get it right, it is obvious that it is right – at least if you have any experience – because usually what happens is that more comes out than goes in.”

(Feynman, “Seeking New Laws”, p.171).

So, Feynman’s take on the mental work that goes into discovery is that it is a persistent, strategic, guessing game.  The only way to succeed is to keep guessing until you get it right by learning something new:

“We must, and we should, and we always do, extend as far as we can beyond what we already know, beyond those ideas that we have already obtained.  Dangerous?  Yes.  Uncertain?  Yes.  But it is the only way to make science useful.  Science is only useful if it tells you about some experiment that has not been done; it is no good if it only tells you what just went on.  It is necessary to extend the ideas beyond where they have been tested.”

(Feynman, “Seeking New Laws”, p.164).

So that’s the pursuit of discovery as our friend Feynman sees it: if at first you don’t succeed, guess, then guess again.  It’s often hard when mining the echoes of history to know which conversations to shout forward and which to let fade out.  It’s also rare that I quote so much from one voice.  But as advice from one insightful scientist to future generations, Feynman’s reflections on the art of scientific discovery are still a conversation worth hearing.

At Discovery’s Edge

The balancing act between theory and practice, qualitative insight and quantitative assessment, is a tough one.  In my quest to develop a repertoire of skills and practices targeted at scientific discovery, theory and qualitative insight have dominated the body of literature I’ve read so far.  Until I came across a magnificent pair of papers published by a group of sociologists and a theoretical biologist.  Their goal was to analyze a tension often discussed among scientists: stick with tradition or pursue innovation?

In these recent papers, the authors devise a living map of “what is known”, represented as a series of nodes and links between them, on a network graph.  They use biochemistry as their scientific use case; nodes represent molecules and links between nodes represent published connections between molecules.  They do this using a massive network mapping of molecules and connections appearing in abstracts of published articles in journals—around 6.5 million abstracts.  Ah, the glorious face of big data.

So, in this little microcosm of knowledge about discoveries in biochemistry, what can we learn about community-wide research strategies?

The first thing we learn is that there are techniques to map “what is known” and “how was it discovered” in a way that makes them amenable to quantitative interrogation.  This is no small matter because in these two papers the authors pursue two fascinating questions: (1) what balance does a scientific community strike between pursuing tradition and pursuing innovation as the knowledge network grows; and (2) what can be done to maximize the exploration of such knowledge networks?

The answer to the first question is given in the longer of their two sociology papers (heavy reading for a poor physicist, but worth every ounce of effort).  As the knowledge network grows, research becomes more intensive and localized on already well-explored nodes and well-explored links, i.e., research favors tradition.  Innovation, exploring or seeking new nodes and links, is marginalized and receives less attention.  The authors connect this leaning in to tradition and leaning away from innovation to numerous factors, including some of the usual suspects like pressure to achieve high publication and citation rates for job security and job advancement.

In their second, shorter, paper they examine their newly quantified knowledge network from the perspective of maximizing discovery, defined as discovering new links and nodes in the network.  They find that when the knowledge network is young the approach of tradition, a localized search moving outward from central nodes (important molecules), is efficient.  But as the knowledge network grows this approach becomes more inefficient, even though this is the strategy that becomes more favored and represented in the published literature over time.
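The contrast between the two strategies can be caricatured in a few lines of code.  Below is a toy simulation (entirely my own construction, not the authors’ actual model; the node and link counts are invented) in which each “experiment” tests one candidate link in a hidden ground-truth network.  The “tradition” strategy always anchors its test on an already well-studied hub molecule, while the “innovation” strategy tests pairs drawn from anywhere:

```python
import random

random.seed(1)

# Hidden ground truth: 200 "molecules" connected by 600 undiscovered links.
N = 200
true_links = set()
while len(true_links) < 600:
    a, b = random.sample(range(N), 2)
    true_links.add((min(a, b), max(a, b)))

def run(strategy, experiments=400):
    """Run a search strategy and return how many true links it discovers."""
    found = set()
    core = set(range(5))  # a few well-studied "hub" molecules
    for _ in range(experiments):
        if strategy == "tradition":
            # Localized search: always anchor the test on a known hub.
            a = random.choice(sorted(core))
            b = random.randrange(N)
        else:
            # Risky search: test a pair drawn from anywhere in the network.
            a, b = random.sample(range(N), 2)
        if a == b:
            continue
        pair = (min(a, b), max(a, b))
        if pair in true_links:
            found.add(pair)
            core.update(pair)  # each discovery enlarges the known core
    return len(found)

print("tradition found :", run("tradition"), "links")
print("innovation found:", run("innovation"), "links")
```

The interesting exercise is to vary the network size and experiment budget and watch how the two counts compare; no claim is made that this toy reproduces the papers’ quantitative results, but it captures the basic distinction between hub-centered and diversified search.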

They suggest a number of policy remedies that would trickle down to individual discoverers by enacting change at the community level:

“Thus, science policy could improve the efficiency of discovery by subsidizing more risky strategies, incentivizing strategy diversity, and encouraging publication of failed experiments…Policymakers could design institutions that cultivate intelligent risk-taking by shifting evaluation from the individual to the group…[Policymakers] could also fund promising individuals rather than projects…Science and technology policy might also promote risky experiments with large potential benefits by lowering barriers to entry and championing radical ideas…”

[Rzhetsky et al., PNAS vol. 112, no. 47, p. 14573 (2015)]

As always though, I remain most concerned with how the individual can take action: how, with my own two hands and one mind, can I weave outward and effect change in the shape and size of the known web of knowledge, especially in my own field of neutrino physics?  If I combine what I’ve read in these fascinating sociology papers with my thoughts in “A Good Map is Hard to Find”, then I arrive at an idea: my own two hands and lone mind can make one PowerPoint.

Now, I’ve been invited to attend a workshop to discuss possibilities for discovering new physics in a newly observed reaction called coherent elastic neutrino-nucleus scattering, or CEvNS (i.e., a neutrino bounces off the nucleus in an atom as if it were one solid unit, instead of bouncing off of one proton or one neutron in the nucleus).  Workshops to produce agendas, devise long-term strategy, and draft roadmaps and white papers are ubiquitous in physics (and other sciences).  It’s how communities foster consensus on “what to do next.”

To me, an agenda-setting, roadmap-writing workshop seems like the perfect time to field test the idea of a “discovery call”: a voluntary, open-science call to action to trial scientific discovery strategies.  A “discovery call” is something you can talk about with colleagues, add to a website, or put on a PowerPoint slide.  The discovery call I’ll be pitching is as follows: in physics, particles are analogous to molecules, and particle interactions and mechanisms are analogous to connections between molecules.  Can we build a network map of published trends in our area of interest, CEvNS, and consider new strategies to maximize our network coverage with minimal experiments?  And can we take this a step further and build two other deeply analogous maps to use for comparison: one for neutrino neutral-current interactions (i.e., where a neutrino bounces off another particle) and one for neutrino charged-current interactions (where a neutrino bounces off of another particle, changing particle type in the process)?  It would be a way to provide a roadmap with a greater degree of informed choice about how, and how well, we’ve explored a given microcosm.

It seems to me that we have an opportunity to leverage our own history to help point our compass toward discovery, and to be able to see where untried paths have been neglected but might now be the roads best taken.  Perhaps today is the time to map what is known, with greater awareness and more practical purpose, so that tomorrow we can stand at discovery’s edge.

 

Interesting Stuff Related to This Post

 

  1. Jacob G. Foster, Andrea Rzhetsky, and James A. Evans, “Tradition and Innovation in Scientists’ Research Strategies”, American Sociological Review, volume 80, issue 5, pages 875-908 (October 1, 2015).
  2. Andrea Rzhetsky, Jacob G. Foster, Ian T. Foster, et al., “Choosing experiments to accelerate collective discovery”, Proceedings of the National Academy of Sciences of the United States of America (PNAS), volume 112, issue 47, pages 14569-14574 (November 24, 2015).

 


How to cite this post in a reference list:

 

Bernadette K. Cogswell, “At Discovery’s Edge”, The Insightful Scientist Blog, September 21, 2018, https://insightfulscientist.com/blog/2018/at-discoverys-edge.

 

[Page feature photo: A dewy spider’s web in Golcar, United Kingdom.  Photo by michael podger on Unsplash.]

The Re-Education of an Educated Mind

I once told a fellow graduate student at a nuclear physics summer school that, “I don’t speak math.”  He found this very funny, and me very funny.  But I absolutely meant it.  In fact, I was angry about it.  By that time, I had already met the sleep-depriving scientific discovery question I’ve dreamed of answering for the last decade.  I had been trying to solve it.  It’s why I attended the nuclear physics summer school at all.  It was considered “outside my area”, since my Ph.D. advisor and I had agreed I would declare my concentration as particle physics.  I thought gaining more knowledge would help me make progress.  But then I discovered that I don’t speak math.  I read math.  I calculate math.  I derive math.  But I don’t speak math.

In my current conception of the scientific discovery cycle the flow goes like this: question → ideation → articulation → evaluation → verification, with constant feedback between phases, and the ability to reset to an earlier phase as needed.  At the time of the nuclear physics summer school, I had the question in mind and I’d come up with three possible ideas for answers.  But my efforts completely died at “articulation”, re-phrasing my mental conceptualization of each answer as mathematical equations, because I didn’t speak math.

What do I mean by “speak” math?  And how is this different from reading and calculating?

Put it in another context.  As someone who has idly studied five languages besides my native English (no, I can’t speak them all now), and who has a parent who raised me semi-bilingual and who does her professional work in at least two languages, I’ve experienced the feeling of “reading without speaking” many times before.

“Read” means I can identify things on signs that I’ve memorized or seen before.  “Read” means that I can sometimes derive related things, like word signs for the women’s toilet in a restaurant versus the signs I saw at the airport.  “Read” means I can muddle through restaurant menus, especially if there are pictures.

“Speak”, on the other hand, means I can mention to a restaurant server that the ladies’ room is out of toilet paper.  “Speak” means I can make a special meal request that’s not on the menu at all.  “Speak” means I can compose a Physicist’s Log entry about scientific discovery, even when I’m not sure how to define it, how to describe it, or how to achieve it.

“Read” means recognition; “speak” means creation.  While I can read math just fine, I can’t create new mathematical expressions with meaning off the top of my head, the way I can churn out sentences in a log entry.  Because I “can’t speak math”, there’s a bottleneck in my discovery cycle, right at the phase of articulation.

I’ve spent years since that summer school digging around looking for practices to help relieve the bottleneck:  Do more math! (Funny how more reading doesn’t equal better speaking.)  Try Fermi questions! (Back-of-the-envelope calculations to answer odd questions about everyday life; but these mostly just add and multiply things.)  Just practice modeling!  (Writing down just the starting equation, given any kind of physics word problem.  But this assumes you already know the physics and just need to recognize it in the problem.  What happens when nobody knows the physics yet?)
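To show what a Fermi question looks like in practice, here is a minimal back-of-the-envelope sketch. The inputs are order-of-magnitude numbers of my own choosing (roughly the standard solar neutrino flux and a rough human cross-section), not figures from any particular source; the point of the exercise is only the power of ten.

```python
# A Fermi-style back-of-the-envelope estimate (illustrative, order-of-magnitude
# inputs only): roughly how many solar neutrinos pass through a human body
# in one hour?

solar_nu_flux = 6e10       # solar neutrinos per cm^2 per second at Earth (rough)
body_cross_section = 5e3   # frontal area of a human body, in cm^2 (~0.5 m^2)
seconds_per_hour = 3600

neutrinos_per_hour = solar_nu_flux * body_cross_section * seconds_per_hour
print(f"~{neutrinos_per_hour:.0e} neutrinos per hour")  # on the order of 1e18
```

Notice the method really is just “add and multiply things”: pick plausible round numbers, combine them, and trust only the exponent.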

It wasn’t until I started studying cognitive psychology and scientific discovery that I came across a new option in a book called Where Mathematics Comes From:  How the Embodied Mind Brings Mathematics Into Being, written by George Lakoff and Rafael Nunez, a linguist and a psychologist team who study the mind and mathematics.  Their theory is simple: all mathematics comes from lived sensory-motor experience that we then translate into the domain of mathematics via conceptual metaphor.  ALL mathematics: addition, subtraction, the concept of numbers, imaginary numbers, algebra, trigonometry, and on and on.  The final case study they do of the famous Euler equation and all the conceptual metaphors it requires is fascinating.  Most interesting in their theory is the sense that mathematics is not just derived (recognized, manipulated, objectively discovered), but that it can also be contrived (built, constructed, subjectively created).

In Lakoff and Nunez’s scheme, one could learn to speak math.  One could learn to construct mathematical expressions in the same way we construct sentences by consciously, explicitly building math expressions based on careful selection and combination of the underlying embodied metaphors (and still strictly adhering to the operational ground rules of math).  That this is based on conceptual metaphor (closely aligned to analogy and, hence, scientific discovery), and that the metaphors are based on physical experience (suited to a physics focus on the natural world), was music to my ears.

So, I may not speak math yet.  What’s more, taking Lakoff and Nunez’s approach may require a little re-education when it comes to how I think about math.  But now I know speaking math is possible.  And in the pursuit of scientific discovery, the re-education of an educated mind is a small price to pay to keep the discovery cycle alive.

The Powerful Patroness

J. Hollingsworth’s article on institutional factors affecting scientific discovery and D. Coyle’s book discussing the role of coaching in the development of exceptional ability have me thinking about how connections with other people affect the discovery potential of the individual. In particular, they got me thinking about the Master-Apprentice model.

Physics Ph.D. training essentially follows a Master-Apprentice model with universities playing the role of guilds.  Each guild (institution) dominates in its local region and specializes in certain styles (physics specialties) as well as techniques.  The masters (staff researchers) have their own production agenda (research agenda) and apprentices (graduate students and undergraduates) join masters via recommendations and provided resources exist (places in the program and funding).  Apprentices perform more routine tasks, at the direction of the masters, that help prepare work for those at higher skill levels.  A journeyman, who started as an apprentice and has gained more skill and experience, undertakes intermediate tasks with less supervision, following the master’s agenda.  Eventually, with continued practice and experience, journeymen become masters.  Masters set the agenda and retain the most skilled work for themselves.  Whether we are talking about the training of artists by guilds in the 15th century or the training of physicists by universities in the 21st century, the Master-Apprentice model still exists.

Furthermore, thinking of Coyle highlights why the analogy came up in the first place.  In one instance, this guild system produced a hotbed of artistic invention: Renaissance art in Florence, Italy, from greats like Da Vinci, Michelangelo, Verrocchio, Donatello, and others.  Similarly, Hollingsworth’s paper suggests another discovery hotbed example, the Department of Biochemistry and Molecular Biology and the Department of Organismic and Evolutionary Biology at Harvard University in the 1950s, which produced an exceptional number of discoveries over a few decades.

I think this idea of institutional hot zones that produce famously skilled individuals will sound familiar to many.  I’ve been both outside such zones and inside them over the course of my life.  Having been on the outside, I’m forced to ask myself, “What’s an outsider to do when you don’t have access to a discovery-skilled zone?  When no master or mistress will accept you as an apprentice?”  Quite frankly, the many theories about how institutional hotbeds of skill arise and are sustained are informative, but the statistical truth is that most of us will either never have access to one or will not have consistent access in our lifetime.  So, what’s a dedicated mind intent on discovery to do?

An alternative strategy that has worked for me is what I call the “powerful patroness” model, in contrast to the “Master-Apprentice” model.  I say patroness because I’ve never had a male patron in the sense I’m about to define; and I used “master” before because, in my physics training, I’ve never had a “mistress” or a female supervisor.

A powerful patroness is an individual who involves herself in supporting the discovery capacity of another individual, even when the patroness and discoverer share no other common professional and discovery goals, by physically intervening on the discoverer’s behalf.  This intervention can be a conversation, a recommendation, advocacy, or funding, to name a few examples.

I have had at least five powerful patronesses in my time and, as a result of their contributions, I have been able to move back and forth among institutional discovery zones in the physics system, and, on occasion, been able to break inside from outside.  While it remains to be seen what impact this will have on my discovery track record in the long run, it is interesting to note that the most recent addition to my patroness pantheon is the late, great namesake for my current position as a Dame Kathleen Ollerenshaw Fellow.

Dame Kathleen Ollerenshaw was an astute mathematician and politician, who came of age in England during the World Wars.  She was instrumental in the founding of the Royal Northern College of Music in Manchester, devised an equation to solve the Rubik’s cube toy, and only recently passed away at the age of 101. All of which is only made more impressive by the fact that a viral infection left her deaf from the age of eight.  The fellowships supported by a trust and named in her honor are not strictly field or research agenda specific, but are competitively open to a broad array of researchers.  In a recent article on fostering discovery, it’s suggested that one way to support discovery is to support researchers, not research agendas, to allow for greater risk taking:

The sustained preference for conservative research, despite greatly expanded access… and the chance for greater rewards, suggests that institutional structures incentivize lower-risk research. For example, a young researcher pressured to publish frequently will favor incremental experiments more likely to be accepted by journals.

“If we want to push that risk, then we’ll have to change the recipe,” [James Evans, a study author] said. “We’ll have to reward at the group level, like Bell Labs did in its heyday, or fund individual investigators independent of the project, so they can intelligently allocate risk across their personal research portfolios.”

I am a proponent of multi-stream approaches, not just one mainstream approach, so I like the option of seeing all models at play – Master(Mistress)-Apprentice, powerful patron/ess, researchers and research agendas.  It seems that in pursuit of discovery, sometimes people are your greatest resource.

Representation (Not Rightness) Rules

Which is a more correct representation of a beloved member of your life—an audio recording, a photograph, a video recording, a pencil sketch, a realist portrait painting, or an abstract painting?  That’s the question I keep asking myself every time I think about analogies, metaphors, and representations in physics.

The classic example of a representation challenge in physics is wave-particle duality:  do particles act like little billiard balls?  Or like waves moving through a non-existent medium?  The answer is they act like both.  The challenge is, as realities, they feel mutually exclusive.  But, as representations, they act as complements.  Each representation, either wave-like or particle-like, gives a framework for describing how a fundamental object, like a photon or an electron or a neutrino, will behave under certain circumstances.  Both representations are right in the sense that they will produce precise numerical results that can be calculated and will match observed values.

In the same way, if I gave you a photograph of a close family member in my life to try and describe their behavior—how they interact with the world—you would gain one kind of understanding.  If, on the other hand, I gave you an audio recording of that same family member, the information would be complementary to what you learned from the photograph, but completely different.  Obviously though, we don’t cry foul and say, but how can the person be invisible voice waves and a static two-dimensional color object at the same time, and what does this have to do with their behavior?

That’s because we understand that they are representations of a thing and not the thing itself.  Of course, from an intellectual standpoint this argument is partly philosophical and psychological and has had volumes written about it.  But from a practitioner standpoint there’s no challenge: both representations are valid, and the combination gives a better understanding than either one representation alone.  In fact, in the close family member’s behavior analogy it’s easy to see that having more representations is better, because each added representation layers our perspective with additional understanding.

If I were trying to discover something new about someone else’s family member it might even help to force me to use different representations: an audio recording might tell me about how that person speaks or interacts with others, a photograph might show me that person’s physical characteristics and the kinds of events they participate in, an abstract painting might tell me what about that person most captures someone else’s perception.

In physics, having multiple representations of the same physical system can do the same thing, especially since most of our studies want to know about the behavior of something (its dynamics), but most representations are static (don’t move).  Words and math sit on a page.  Photographs sit frozen in a flat plane.  Videos sit in a flat plane and replay a sequence of still shots at high speed over and over.

At least with a living family member we can go meet them in person.  We can set aside the photograph.  We can ignore the voicemail.  We can turn off text and video messaging and go get all that experience in real-time, face-to-face.  Not so in physics.  The simulated photographs, the recordings, and the equations are as close as we will ever come to some members of nature’s family, especially in particle physics.  Biology, geology, and the social sciences, to name a few, have the advantage over particle physics, in that respect.  Though any investigations into the past are equally handicapped by lack of direct access.

So, it seems to me we need to accumulate as many representations and models as we can get our hands on.  Aim for a collage, not a pixel.  No one representation will ever be all things to all situations.  Because no representation will ever be the real thing.  By narrowing down representations to “the right picture” instead of generating representations to get “the right mix” we cut off a route to discovering something new.  After all, when we allowed both the wave and particle representations into physics we opened the door to countless previously inconceivable and undiscovered phenomena, like neutrino oscillations (the ability for a neutrino particle to spontaneously change particle type as it travels, which relies on quantum mechanical wave interference between its constituent parts).  When it comes to conceiving of the inconceivable, representation, not rightness, rules.
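To make the neutrino oscillation example concrete, here is a sketch of the standard two-flavor oscillation probability, P = sin²(2θ) · sin²(1.27 Δm² L / E), in the usual units (Δm² in eV², L in km, E in GeV). The formula is the textbook two-flavor approximation; the parameter values below are round, illustrative numbers rather than fits to data.

```python
import math

def oscillation_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor neutrino oscillation (appearance) probability.

    sin2_2theta : mixing amplitude sin^2(2*theta)
    dm2_ev2     : mass-squared splitting in eV^2
    L_km        : baseline in km
    E_GeV       : neutrino energy in GeV
    """
    # The factor 1.27 absorbs hbar, c, and the unit conversions (eV^2 * km / GeV).
    phase = 1.27 * dm2_ev2 * L_km / E_GeV
    return sin2_2theta * math.sin(phase) ** 2

# At the first oscillation maximum (phase = pi/2), the probability equals
# the mixing amplitude itself.
dm2 = 2.5e-3                               # eV^2, a round atmospheric-scale splitting
E = 1.0                                    # GeV
L_max = (math.pi / 2) * E / (1.27 * dm2)   # baseline of the first maximum, ~500 km
print(oscillation_probability(1.0, dm2, L_max, E))  # 1.0 for maximal mixing
```

The sin² interference term is exactly the wave representation at work: it has no counterpart in a pure billiard-ball picture of the particle.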

Base 10

What does it mean to have a canon?  In English studies this is usually a body of texts that it’s assumed most serious scholars have read deeply and which somehow embody whatever characteristics or themes are deemed most relevant to a given perspective (e.g., a Western canon, a Shakespearean canon, a post-colonial literature canon, and so on).  In other words, the members of a canon act as pillars in the foundation of a shared body of knowledge.

In physics, we don’t really have a canon.  There are many famous historical papers and a few books and textbooks, but mandatory deep study and a shared list of “why these are canonical” (even if hotly debated) is not really in our culture.  There are perhaps, to a degree, canonical problems—physics problems everyone sees and attempts (recognizing that, ironically, who your “everyone” is will vary by sub-field).  These are most often presented in one of two groupings: by subject (mechanics, thermodynamics, astrophysics, etc.) or by math (differential equations, group theory, etc.).  Only ever so rarely are these problems grouped by core concept in any consistent way (perhaps Feynman’s three volume lecture series is the best example here).

My roving imagination and mind were hard at work again when I came across a piece about speed reading.  What captured my attention most was the emphasis on (1) first learning the technique, then (2) practicing the technique for speed while ignoring comprehension, then (3) practicing the technique at speed with comprehension.  For a while, it has seemed to me that analogical thinking is a good test case for a discovery strategy applied to active, professional research.  But how to do that?

I have some ideas for how to synthesize a few operational analogical processes, which I’m hoping to work on with the help of master’s students this Fall semester.  But the speed reading piece reminded me that practice is key.  So how to practice?  Well, in English studies you practice critical thinking skills on the canon where you can compare your results with others, then you venture out into other non-canonical areas.  In physics our own canon is problems, so that means that to study and practice discovery strategies one will need a good discovery canon.  I’ve nicknamed the physics discovery canon I’m developing “Base 10.”

In my experience as an undergraduate student I always followed what I called “The Rule of 10”: practice a new math technique ten times before applying it to what you actually want to solve.  This was a necessary expedient since, by the time I started back in on my physics degree, it had been 5 or 6 years since I had studied the subject and I took the minimum number of courses (which meant little math) to get out of undergraduate and on to graduate school as quickly as possible (a money problem, not a time problem).

But of course, this rule of ten strategy also requires problems to practice on.  Hence, base 10 as a general rule for the number of test cases I need to try something out.  Now my natural inclination toward favoring analogical discovery strategies over others, combined with another math-inclined strategy known as “easy cases” (aka “toy models” where you keep the simple stuff and leave out the complicated details) has led me to believe that the standard groupings of physics problems may not be suited to my needs.  I need more conceptually useful categories right now, not categories that are mathematically similar or topic dependent.  It’s just a hunch, but worth an attempt.

So, I am slowly compiling my base 10 physics discovery canon to practice discovery strategies on.  The worst that happens is a little trial and error (technically, another discovery strategy which goes by the formal name of “generate and test”).  And if it doesn’t work out then, as I always tell my students, there’s a reason why it isn’t called “trial and success.”

Find Your ARQ

Every good story needs an arc: a master strategy that drives all the action from start to finish.  Something that, for the writer, guides every word written with the knowledge that an outcome must be obtained: the arc must begin and end.  It cannot run off to infinity.  Are not research undertakings much the same way?  You must achieve some end, be it innovation, new knowledge, new technique, or discovery.  And, as the lead researcher, you must keep that in mind at all times, allowing it to guide your actions.

Of course, this is just an analogy.

But I am struck by how frequently analogies and analogical thinking appear in the literature on discovery.  In cognitive psychology some consider analogy one of the key problem solving skills in working at the boundary of knowledge. It appears again in research on problem solving where written, drawn, and even animated analogies have been studied.  It’s even factored into relevant concepts like design innovation.

So I find it a great irony that, despite general agreement that analogical thinking plays a role (and possibly a crucial one) in scientific discovery; despite the fact that analogy appears almost universally as a reasoning skill across cultures; despite the fact that analogy can be applied to any human knowledge domain and that analogy can use as source material any human knowledge domain; analogy is called in the technical jargon a “weak problem solving method” (in reference to its general domain use; versus “strong” methods, which are highly domain specific). If ever a bit of technical jargon did a disservice to its meaning, I think it’s here.

In physics we tend to marginalize analogical thinking as something handy for pedagogy, or public engagement, or just private understanding.  [Here I refer to conceptual analogy; not mathematical analogy, which is used heavily in physics.]  But we rarely envisage analogical thinking as a systematic, efficient, front-line professional research strategy for even “low hanging fruit” questions, let alone for the serious and risky business of discovery.  But the research suggests to me that it can be.

Which brings me back to the analogy of an arc.

Initially, I struggled to develop a clear strategy in pursuing scientific discovery.  In other words, I struggled to find my story arc.  But it now seems to me that the main task of scientific discovery and scientific inquiry is to explicate new analogies–to draw links between the known world and still unknown aspects of Nature.

If I define analogy as a way of identifying similarities between seemingly dissimilar things, isn’t that the very foundation of what we do in physics?  To say that an apple falling from a tree and a planet orbiting the sun are both acted upon and behave that way through the same concept of gravity is the mental act of finding similarity in the apparently disparate.  In fact, to say that any set of behaviors or characteristics, for any observable, physical system, can be explained by a common law, function, or model seems a pretty radical act of analogy to me.  So, each and every scientific research question is in some nuanced way an analogical research question…or a scientific ARQ, you might say.

There is still much more to this story of course.  There is a great deal of deeply thoughtful research about analogies—qualitative, quantitative, and operational.  Ideas about how to use it to communicate, to educate, to investigate.  Frameworks for how to define the relationship between the source domain of the analogy and the target domain where it is applied.  And on and on.  I believe this research contains some of the first scientific discovery strategies to formalize and try.  But this will not be an easy task.  Still, at least now I have the first step in pursuit of discovery…

The first step toward discovery is to find your ARQ.

A Good Map is Hard to Find

The idea of mapping information is heavily used and widely favored today.  There are mind maps, geographical terrain maps, all manner of mathematical graphs to map relationships, and maps for “landscape analysis” used to summarize the state of the art in many fields.  But it turns out that when I look around the discovery literature a good map is hard to find.

Clearly I am biased (as evidenced by “Spark Point” and “The Idea Mill”) toward thinking about things in a map-like framework of (1) focusing on key points and connections, and then (2) refining and re-articulating those elements into a nice, neat shareable package.  At that stage, to me, the map becomes an externalized physical model that can be manipulated and played with, letting you toy with the underlying knowledge cluster sketched out by the map.  And going back to “The Physicist’s Repertoire”, if scientific discovery involves both content and skills then one might want at least one map outlining each arena.  So what kind of map might I use?

Mind maps are the easiest choice—free software or pen and paper, associative thinking, unconstrained.  But mind maps are so free form that the permutations are endless, making it hard to assess if adaptations of the map are fruitful; there can be too many options to try.  Luckily, I came across two other maps that seem to me to have more promising bones.

One is called a “territory map” from Susan Hubbuch’s book Writing Research Papers Across the Curriculum.  It lays out central points in a topic, the hierarchy of points, the direction of ideas between points, and the relationship between points.  This may just have been devised as a drafting device, but it strikes me as a potential foundation for a research tool.  If one laid out a set of knowledge, like scientific discovery skills, as Hubbuch suggests then you would have a territory map representing what is known, perceived, or believed.

Then you could play “what if?”  What if a given sub-hierarchy changed, a direction were reversed, or relationships were added or subtracted?  Now since Hubbuch’s territory map also has built into it a “beginning” and an “end” (again, it’s designed for drafting a paper with an introduction and a conclusion), that means there is an overall flow from foundation points to supported conclusion.  So, in a skills map, could this flow run from actions taken to supported outcomes?  In other words, could it be fashioned into a draft of a decision-making tool (more usually called a decision tree)?  If so, it could be a powerful way to articulate and refine scientific discovery paths.
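A minimal sketch of what playing “what if?” on such a map might look like, once the map is externalized. The structure and the node names here are my own invention for illustration, not Hubbuch’s notation: the territory map is treated as a small directed graph, and one “what if” move reverses the direction of a single link.

```python
# A toy territory map as a set of directed edges:
# (supporting point -> supported point). Node names are illustrative only.
territory_map = {
    ("embodied metaphor", "mathematical notation"),
    ("mathematical notation", "articulation phase"),
    ("articulation phase", "candidate answer"),
}

def what_if_reverse(edges, edge):
    """Play 'what if?': reverse the direction of one relationship."""
    assert edge in edges, "can only reverse a link that exists in the map"
    src, dst = edge
    return (edges - {edge}) | {(dst, src)}

# One "what if?" variant of the map, with a single direction flipped.
variant = what_if_reverse(territory_map, ("embodied metaphor", "mathematical notation"))
print(("mathematical notation", "embodied metaphor") in variant)  # True
```

Because each variant is itself a map, variants can be compared against each other and against the original, which is exactly the kind of manipulation an externalized model permits.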

Another possible type of map comes from Sanjoy Mahajan’s The Art of Insight in Science and Engineering, in a chapter outlining the technique of using “easy cases” to reduce complexity in order to foster insight.  The author calls it an “easy-cases map” and it’s essentially a flow chart showing the change of a wave equation between ocean regimes and the physical meaning of each regime.  It caught my eye because I once studied the reflection of sound waves, for submarine sonar under various ocean conditions, as part of a high school internship.  And I never felt I actually grasped the relationship between domains of different ocean conditions.  Where was this map 20 years ago?!  Better late than never I guess.
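For a flavor of what an easy-cases map encodes, consider the textbook phase speed of a surface gravity wave, v = sqrt((gλ/2π) · tanh(2πd/λ)), for wavelength λ and water depth d. This is the standard dispersion relation, used here as my own illustration of the regime idea rather than a reproduction of Mahajan’s figure: the full formula reduces to sqrt(gλ/2π) in the deep-water regime (d ≫ λ) and to sqrt(gd) in the shallow-water regime (d ≪ λ).

```python
import math

g = 9.8  # gravitational acceleration, m/s^2

def wave_speed(wavelength, depth):
    """Phase speed of a surface gravity wave (full formula)."""
    k = 2 * math.pi / wavelength  # wavenumber
    return math.sqrt((g / k) * math.tanh(k * depth))

def deep_water_speed(wavelength):
    """Easy case: depth >> wavelength, so tanh(k*d) -> 1."""
    return math.sqrt(g * wavelength / (2 * math.pi))

def shallow_water_speed(depth):
    """Easy case: depth << wavelength, so tanh(k*d) -> k*d."""
    return math.sqrt(g * depth)

# Each easy case matches the full formula in its own regime:
print(wave_speed(10, 1000), deep_water_speed(10))    # deep regime: nearly equal
print(wave_speed(1000, 1), shallow_water_speed(1))   # shallow regime: nearly equal
```

The easy-cases map is then just the diagram connecting these limits: which regime you are in, which simplified formula governs it, and what that formula means physically.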

Mahajan’s map-like synthesis, especially between regimes bounded by some key variable or other (which is all-pervasive in physics), strikes me as so potentially useful.  Mahajan’s mathematical map is very much the counterpart to Hubbuch’s conceptual map.  The more variations of either map you have, for the same question or discovery goal, the more you can explore.  Because once something is mapped then you can compare maps for similarities and differences—it’s a powerful multipurpose abstraction.  The key would always be to capture the most “useful” features in a map, so that the meaningful similarities and differences that can act as a spark point for discovery jump out at your perception (which is much faster than cognition).

For now, I have started drafting my first map of discovery strategies and also one of open questions in neutrino physics.  The process will surely be iterative.  But who knows: I may find that the act of mapping and iterating itself will have a part to play in my pursuit of discovery, and in any case, when you’re out pioneering you can never have too many maps.