Big questions, bright futures

April 21, 2018

One of the best things I’ve done as Chief Scientist is to sit on the judging panel for the final of the Three Minute Thesis.

It’s exactly what the name suggests. PhD students get three minutes to explain their thesis. We judge them. That’s it. No costumes. No props. No audio. No animations. No rapping. No singing.

It tells you a great deal about PhD students that all of these things are specifically banned in the competition rules. Just a person. On a stage. Talking. About a thesis. One static PowerPoint slide in the background. As raw as it comes. And the time limit is savagely enforced.

How long is three minutes? About the recommended cooking time for a toasted cheese sandwich. For those who think in pages, picture one A4 page in Times New Roman font. Size 12. Standard margins. It is hard enough to keep it to three minutes when you only know a little about your topic.

It is incredibly difficult to know your topic so thoroughly that you can whittle it down without mauling it to death. But that’s exactly what every one of the ten finalists did. They were fiercely intelligent, highly entertaining, and absolutely compelling.

As the session wore on I found myself getting almost resentful. Couldn’t just one of these presentations be a little less worthy of winning, so it wouldn’t be so impossible to judge? But no-one obliged. No, they were determined to be uniformly brilliant. Darn them.

But I suppressed that unworthy and ignoble thought.

“Alan, you’re the Chief Scientist: just be grateful that these people are advancing knowledge instead of being lawyers billing their clients in three minute blocks. Be grateful that they’re doing PhDs.”

And that got me thinking about the question that brings us here today. Let’s assume that in every generation we get a certain cohort of talented, driven people. In this generation, we know that many of those people will choose to do PhDs.

By any measure, it is an enormous investment of our society’s potential. So the question always has to weigh on us. What does it mean to be worthy custodians?

Today I want to talk about the foundation: culture and ethos. I want to reflect on what it means to be a twenty-first century scientist. And in the traditional way, I have framed my thoughts as the advice that I would give to PhD students, striving to excel.

Now I know that advice – unlike funding – is one of the very few things that PhD students tend to get in abundance.

This year, in particular, I have seen many articles to the effect that “science is broken” and so the best advice you can give to a person contemplating a PhD is “don’t”.

That won’t be my advice.

First, I reject the hypothesis that “science is broken”. It suggests that science was somehow not broken or at least significantly better at some point in the past. And I just don’t see it. Point me to the period in human history where we had more brilliant people or better technologies for doing science than we do today.

I agree, we certainly have our frustrations, like every other profession. But point me to the era when scientists were always courteous to their colleagues, and good at explaining themselves to other people, and were given all the support they could possibly need to pursue rich personal lives alongside stellar academic careers.

Point me to the world where science was open to people of all backgrounds, and science was not misused by people with bad intentions. Point me to the society where evidence always ruled the public discussion, and every time a scientist spoke the news cycle stopped, so that journalists would have the time to pay attention.

That Golden Age of Science never existed. The only place to find it is in the future. So let’s not tell you young scientists that “science is broken”. Let’s encourage you to help us make it better.

The second reason I reject the “avoid at all costs” school of advice: I see so many ways that science delivers the goods!

Here’s just one. One of the most helpful things about being a modern human is satellite navigation. Well, satellite navigation is basically clocks in space – atomic clocks, on satellites, calibrated against the atomic clocks in national standards laboratories.

They are very accurate. But if you’ve tried to navigate with your smartphone in inner-city Sydney, you’ll suffer from the problem called multipath interference: signals bouncing between buildings, causing such havoc that poor Google loses it.

“Reasonably accurate” won’t be good enough for a city of self-driving cars. Just imagine it. The solution is to install an atomic clock in every car and smartphone satellite navigation system. But today those clocks are monsters: the best ones fill up rooms.

But what if we could shrink atomic clocks down to the size of a computer chip, so we could actually embed them in smartphones?

That’s what researchers today are attempting. Instead of trapping an oscillating atom in a giant vacuum chamber, they have trapped it in a tiny atomic cage. A single, spherical molecule of carbon, with 60 atoms, properly named a fullerene, informally known as a buckyball after Buckminster Fuller, the architect who developed the geodesic dome.

That single carbon molecule encases a single nitrogen atom and protects it from the environment. The nitrogen atom is stimulated by a laser beam to oscillate at its resonant frequency and presto! A microscopic atomic clock.

Science doing its thing.

From an idea put forth by Lord Kelvin in 1879 to the first atomic clock built seventy years later in 1949. And then another seventy years to go from a monster to a molecule. The Space Age and the Information Age bound up in one.

I look at something so astonishing, and it seems to me that anyone with a pulse should be excited by the possibilities of science in 2018.

Of course people want to do research! They should want to do research! And if they have the passion and the talent to undertake a PhD, then who am I to dissuade them? So, in that spirit: my advice to the next generation begins with this first principle.

A relentless commitment to quality

I was lucky to train under a great scientist, Steve Redman. These days we would describe him as unproductive: he published, at most, two or three papers each year. But every one of those papers was deeply considered, included both a hypothesis and supporting data, was meticulously crafted and, as a result, deeply influential.

The quality of the papers was simply the mark of the way Steve managed every aspect of his research, right down to building the research equipment. In everything, quality first.

These days the pressures and incentives are very different. We have a whole taxonomy of the ways that systems can encourage or enable good scientists to go wrong.

HARKing: Hypothesising After Results are Known.

P-hacking: torturing your data until it screams.

The file drawer effect: selectively publishing only the interesting data.

Pseudo-collaboration: assigning credit where credit is not due, to so-called “non-contributing co-authors”. Yes, it’s an oxymoron – but we all know, it’s been done.

Only academics could develop such a comprehensive field-guide for misbehaviour. They range from the inadvertent to the deliberate. Look up these clever ways to do bad science and know thy enemy – because it would be naïve to suggest that the pressures aren’t real, or that only bad people fall prey to them.

The lesson I take from Steve Redman is that we all need to commit to quality – consciously. Constantly. It’s not necessarily instinctive: but it needs to be ingrained. Your PhD training program should teach research quality through specific coursework. In everything, quality first.

Of course, it’s much easier to be relentless if you’re doing something you love. And that brings me to the second principle.

Know your limitations

I had a very enjoyable time as a young researcher making machines to monitor the electrical activity in… wait for it… the brains of snails. But I couldn’t help but notice that my own brain wasn’t wired like the brains of the people around me.

They got excited about their results. I didn’t really care about the results: I just wanted to make the machine for running the experiment. Then make it better. And better again. So I realised that I could be a deeply unfulfilled scientist – or a passionate engineer making it possible for other scientists to make important discoveries.

I chose the latter.

But first, I had to redefine my difference from the scientists around me: not as a weakness, but as something that, in another context, could be a strength.

Today I have no patience for people who tell me that a person with a PhD who starts a company, or goes into the public service, is a waste of a good academic researcher. The purpose of a PhD is to allow talented people to develop their strengths and choose their direction.

I was lucky, again, to have the guidance of exceptional colleagues and mentors, some of them researchers, some of them with experience in business. But that was the sort of luck I made for myself: I sought out those people whose advice I knew I could trust.

That brings me to Principle Three:

Be a generous listener and sharer

We have told ourselves for aeons that science has a problem with silos. As Chief Scientist, I have a bird’s-eye perspective. I’ve met scientists from the same university, the same department, sometimes even the same corridor, who have simply never spoken.

So, much of my work is matchmaking.

But I have the benefit of distance. From a distance, you can often see the patterns that are hidden from the people working up close. I know that for PhD students in particular, the research life can be isolating, anxious, and all-absorbing. We need to encourage the habit of conversation, not as a sideline but as simply what good scientists do.

That doesn’t mean it happens by default. Like committing to quality, it has to be a conscious choice. So go to seminars and speak to the attendees. Walk the corridors and see not just what is there but who is there. The pay-off for a young researcher who makes that commitment is good advice – and just as importantly, new opportunities.

Be open to opportunities

This is my fourth principle.

The Nobel Laureate Richard Feynman had a nugget of wisdom that he would hand out freely to young researchers. Come up with a list of your twelve favourite problems. Keep them constantly in your mind: present, but dormant.

Then, every time you come across a new research tool, or an interesting discovery, test it against your twelve problems and see if it helps. It’s amazing, he said, how many people will marvel and say “he’s a genius!” if you just look methodically for opportunities.

Feynman’s advice is an application of an old maxim that I took with me when I left Australia to start my company Axon Instruments in California. “Chance comes to the prepared mind.”

These days I prepare my mind to look for people with interesting stories. And it’s amazing: I never meet a person who hasn’t got one!

In the last month alone, I’ve met with the founders of Gilmour Space. One was a banker for twenty years. The other was a marketing graduate. Now they’ve raised $5 million to launch small satellites into low earth orbit using the world’s largest single-port hybrid rocket engine.

I met with the founders of Tritium. They got their start in the World University Solar Car Challenge. Now they employ 130 people making fast chargers for electric vehicles, and those chargers line highways all over the world.

I spoke at a forum on the same day as Dr Catherine Ball. She completed a PhD in spatial ecology. Now she’s a leader for the global “drones for good” movement, focused on the use of drones for humanitarian work. She’s the co-founder of the first global drone conference, and in her spare time she’s on a mission to give 100,000 women and girls in Australia the opportunity to fly a drone by 2020.

It seems to me that very few people get to interesting places by doing conventional things. No: they get there on the trail of opportunity.

So, four pieces of advice:

  • Relentless commitment to quality
  • Know your limitations
  • Be a generous listener and sharer
  • Be open to opportunities.

So let’s now think about how we can all inch science closer to that future Golden Age. And I’m going to follow my own advice, and take a cue from Richard Feynman. In that spirit, I’m setting out a few big questions. Test out the ideas and approaches above to see if they advance us any further in solving the problems below.

Here, in no particular order, are some of the things that I’ve been thinking about.

The future of the scientific paper

The Atlantic magazine has published a provocative essay: “The scientific paper is obsolete”. It’s done great things since it was developed in the 1600s. And today we could certainly say that production is booming.

But the peer review system is critically overloaded. Page charges are high, and so the critically important methods section is left out. Alternatives pop up overnight because the barriers to entry are low. And the irony is, we’re working so hard to generate papers, we don’t have time to read anybody else’s.

One has to ask, have we hit Peak Paper?

My tentative response is no: the scientific paper has endured for a reason, and it still holds. It’s an efficient way to structure and communicate information. But what do you think? Will we still be publishing papers in 2050? And how else could we do it?

The pressure to publish

I spoke of my ‘unproductive’ supervisor Steve Redman. I think we would all agree that publishing a few articles a year is the ideal. Authors could invest more time in their papers, and peer reviewers could invest more time in their critiques. In the real world, we know that the incentives often skew the other way.

But where do you intervene to break the cycle? I recently saw a radical suggestion: a lifetime word limit for researchers. I suspect it would be very difficult to enforce. But what about a variation: change the focus from publications to CVs.

For starters, let’s contemplate a rule that you can only list a maximum of five papers for any given year when applying for grants or promotions. And, your CV would have to list retractions, with an explanation. And, on the recommendation of Jeffrey Flier, the former Dean of the Harvard Medical School, candidates for promotion would have to critically assess their own work, including unanswered questions, controversies and uncertainties.

One to consider.

Better incentives for thankless work

Should we have dedicated funding for replication studies?

Should we consider awards for high-quality studies that yield negative results and don’t confirm a hypothesis or previous finding?

It’s been tried in some disciplines – could it be done at scale?

Predatory journals

If journals are the gatekeepers, then predatory journals are the termites that eat the gates and make the community question the integrity of the structure. How do we fight back? And how do we arm people in the community who aren’t scientists, and don’t know anything about impact factors and journal rankings and editorial standards, to recognise quality?

Is there an analogy to fair trade coffee: a stamp that consumers can look for on the product that demonstrates it complies with a certain standard? Could we promote an “ethical journal” stamp?

Artificial intelligence

Bloomberg reports that there are now five ways to command a multi-year, seven figure salary. It used to be four: CEO, banker, celebrity entertainer, professional athlete. The recently added fifth is “a person with a PhD in artificial intelligence”.

This is the AI century.

Like all great waves in technology, it breaks on researchers first. Time and time again, you get the future – you make the future – before it sweeps over everyone else. But what does it mean for research training? What roles that scientists do today, will robots do tomorrow? What roles that no-one can do today will become possible, with the power of humans and robots combined?

It’s a fascinating question. And one that lends itself to many, wonderful, insightful PhDs. Is there one in your future with your name on it?

I am well beyond the three minute mark. So let me conclude with the immortal words of the immortal doctor – by which I mean, of course, Dr Who:

“A straight line may be the shortest distance between two points, but it is by no means the most interesting.”

Dr Finkel delivered this opening address to the Quality in Postgraduate Research Conference in Adelaide on Tuesday 17 April.