Physchosis: How academia destroys your mental health.

Apparently I have one of the best jobs in the world. Yeah, I used to think so too. But it’s reached the point where I feel more like a slave, dreading getting up in the morning. I’m far from the only one. And when you see Twitter academic parody accounts joking about guilt for not working on a Sunday night, and know it certainly rings true for many of your colleagues, it’s clear there’s a serious problem.

I want to keep this one short, so here’s what I see as four key causes…

1. The relationship between what’s expected from you and the resources you have available is ridiculously non-linear: The resources available to two different researchers can be massively different. The obvious difference is budget, which can easily differ by three orders of magnitude, but there’s also time, which to some extent scales with budget, as you need money to have people, access to infrastructure, and access to ‘networks’, which in turn bring papers, invited talks, etc. Then there are the demands on your time in the form of teaching load, service load, etc.

When you add all this up, it’s easy to end up with two people in the same department: Prof. A with a $10M annual budget, 10 postdocs and 30 students all producing papers with their name on the end of them, a near-zero teaching load, and easy access to the best papers, conferences and collaborative efforts; and Dr B with a $10k annual budget (if that), 2 Ph.D. students, a huge teaching load, and closed doors that must literally be kicked down to get outcomes that actually count.

That there can be orders of magnitude difference in opportunity is a fact of life that’s tough to take but survivable. What’s truly destructive is that the expectations of output don’t differ by orders of magnitude; in fact, they are often more or less the same. Even with almost nothing you’re still expected to produce high-impact papers, get invited talks, and pump out seas of graduating students, on top of being an excellent educator and writing a disproportionately large number of funding proposals because most get rejected.

The fact that many of us still do achieve this actually suggests that these folks with the much better resources are the underachievers. For every top paper your ‘breadline’ researcher produces, the ones with huge resourcing should pump out a hundred or a thousand. Few do and that’s fine. But you’ll never see management congratulate the breadline researcher on their one top-paper a year despite the adversity. Nope, they’re forgotten, or worse, told to lift their game, to aspire harder to excellence, as if they can conjure up more out of the almost nothing they have to work with in comparison. All the spoils go to the few at the top.

It doesn’t take long before giving your all and more for what feels like nothing in return destroys your desire for the job. It’s hard to go home every day, feeling like an underachiever or an unachiever, when your level of effort in any other job would probably see you as the top employee in the company.

2. Everyone has to be a chef and all the cooks are useless: There’s this crazy obsession in academia with ‘leadership’. I lead this, I lead that, nothing matters unless you’re the leader… us academics have even conjured up the most ridiculous of bullshit terms: thought leader. You know it’s really bad when you mentor junior colleagues applying for grants or promotion or whatever, and have to tell them to remove statements about doing ‘behind the scenes’ work on projects and instead spin the whole thing as some kind of leadership. You know it’s really bad when people in teams squabble over who will be the ‘leader’, or cut the project into little pieces they each can lead, and people don’t want to contribute unless they can claim a formal leadership role over some part of it. You know it’s really bad when everyone finds some little thing they can be director of, just so they can, you know, lead and stuff.

How it hurts is that much of what you do is not actually ‘leadership’, and what science, or any other job, needs is people who will do the hands-on work. Take 10 foremen and put them on a worksite; does a hole get dug? No. Take 10 celebrity chefs and put them in a kitchen; do the dishes get washed or the potatoes peeled? No. Suddenly all the apparently mundane but nonetheless essential tasks become worthless. Why do them at all? Sure, we get paid, but in our job we don’t do things for the money; we do them because we get joy out of contributing. And when your contributions are worthless because they aren’t leadership, why get up in the morning? When you’re told your teaching is good enough if students aren’t complaining, and to stop wasting time that could be spent on research instead, why get up in the morning if you derive joy from teaching well? I could go on, but when most of your job doesn’t matter and isn’t recognised, how can you expect someone to feel good about it?

3. We’ve perverted all the adjectives to the limit: You know you’re an academic when someone says something is pretty good, and you immediately know that’s code for mediocre or even crap. Good is terrible. Excellent is barely acceptable. Outstanding is good but rarely achieved. All through the culture you keep having these perverted adjectives thrown at you — all the infrastructure needs to be world-class, the ‘pursuit of excellence’ is persistent and all-pervasive. Journals and awards and universities can’t just be, they have to be prestigious or top-ranked. The superlatives are endless and excessive and they ultimately destroy your ability to retain perspective.

In many ways, it’s like having advertisements for luxury goods shoved in your face on a daily basis. Before long, your nice old Seiko watch looks like total junk in comparison. With advertising, keeping consumers in a persistent state of dissatisfaction keeps them spending. In academia, it seems as though keeping academics in a persistent state of self-dissatisfaction and ‘underachievement’ keeps them working longer hours for and with less. In both cases, the side effect is mentally destructive.

4. The culture of ‘only brilliance matters’: A common thread in physics is the idea of meritocracy — that your success and value should be determined solely by your achievements as a scientist. There are lots of problems with this myth, and many others have written about them. The aspect that’s particularly destructive is that it creates a system that a) tolerates monsters and b) strips your other qualities out of you, since they don’t contribute to your success.

The former is easy and everyone knows examples, from the extreme, e.g., institutional responses to things like #astroSH, to the benign but still destructive, e.g., “At times he broke in on the initial sentence of the talk, refusing to let a speaker proceed until the point was clarified. Sometime clarification never came; I once witnessed the humiliation of a visiting postdoc who was forced to defend the first sentence he uttered for the entire hour and a half allowed for his seminar. No one dared restrain X.X. At first I imagined that his rigorous questioning was the by product of a pure search for knowledge and truth. Later I began to detect a latent glee with which he savaged the imperfections in other people’s talks. He enjoyed disorienting them.” [1] Often these monsters are exploiting the advantage/privilege described in point 1 to do these things and ruin the workplace for others.

The latter is more difficult to define; it is nebulous, stealthy and sinister, like a slow-growing cancer. It is the system stealing your soul. Before long, you find yourself losing your sense of humour, losing your ability to relate to people, losing your ability to relax and not freak out for fear of what more will land on your plate when someone knocks on the door or the phone rings. You find yourself slinking around the corridor, hoping no one stops you. You find yourself not listening to conversations because your brain is processing your to-do list, or not listening in talks and meetings because you’re on your laptop editing proposal text or writing an email or doing some ridiculous admin task. You find yourself avoiding holidays or activities with family and friends because you need to get job X done or deadline Y is approaching. You work in your downtime; hell, you work all the time. Gradually you become a milder version of the monster that gets created in a system where the only thing that matters is your achievements, metrics and CV. It destroys you as a person and makes you start hating your job and hating your life.

When you add it all up, what you have is the perfect recipe for destroyed people, hollowed out souls who are nothing but work, people who should love their job but spend many days on the edge of falling to pieces, feeling like they’re worthless, deeply suspicious of their colleagues and how they might shaft them in the relentless competition for acknowledgement, which is designed by the system to be the only thing that provides job satisfaction. It’s the perfect recipe for mental dysfunction.

How to fix it? That’s harder and might be for another time…

[1] Emanuel Derman, “My Life as a Quant”, Wiley, NY, 2004.

The Well-done Westie’s guide to being an awesome undergrad

Today was ‘info-day’ at my university, and I spent a couple of hours manning the physics table, answering questions for very recent high school graduates looking to come to our fine institution for their studies. I often do the one we hold in September too. They are always an interesting experience. First, there’s always the conflict between what the student wants, or thinks they want, what the parents want, and what they can actually have. Second, there’s always the conflict between what the University wants to market itself as and the reality of university. These two conflicts interact, leading to everything from unrealistic expectations to bad life decisions that end in fall-outs with parents and dropping out with a massive HECS debt.

I feel a strong call to be really honest with these kids. So I often like to pepper my formal advice with some informal advice, often with hushed tones of ‘don’t tell anyone I said this, but…’ or ‘what the marketing folks don’t want you to know is…’

After many years, I figure it’s time to share some of this more widely, and I’m going to be just as bluntly honest here too. Much of it comes from my own experience as an undergraduate in a time when kids from my kind of background often didn’t go to university, but I’m sure it’s still relevant to many today, since I teach kids in first year who clearly don’t know this stuff. So for better or worse (possibly worse if my Dean or Head of School ever sees this 😉), here’s my ‘Well-done Westie’ guide to being a totally awesome undergrad.

1. No one gives a rats about you*: “You are not a beautiful or unique snowflake. You are the same decaying organic matter as everyone else…” I’m going to start with the harshest truth — your professors still get paid the same whether you pass or fail†. It makes no difference to them. Of course, they’d love you to work hard and pass their course; nothing makes them happier. But if you don’t turn up, don’t work, don’t submit your assessment and flunk the exam, they won’t shed a single tear. Your professors are valued by the system for their research (see #13). Their teaching is secondary, and as long as their fail rates aren’t absurdly high (>50%) or there’s massive raucous complaint about them, the failed students matter for naught.

Hell, in the good old days, in some courses, a 50% fail rate was par and too high a pass rate meant too easy a course and ridicule from colleagues. I have fond memories of my first 2nd year higher complex maths tutorial where we were told “This course has a 50% fail rate. Look at the person next to you… one of you will fail this course.” So, accept this as it is and count your blessings for being the students of today…

This is the first major difference from high school, and the sooner you realise it the sooner you will succeed. The onus is on you to manage your own learning now; no one is here to hold your hand. It is a great thing because it teaches you independence. It can be dangerous because if you don’t take charge, you will waste a lot of time and money on nothing.

Spend time working out how to manage your time effectively, coordinate your assessment tasks so they don’t end up as last-minute all-nighters, and teach yourself new things from textbooks, some of which are almost unintelligible. Learn how to think, reverse engineer, experiment, extrapolate, use logic, argue, write, reason, etc.

2. Cut the umbilical cord: The biggest mistake I see incoming undergraduates make is that they choose their courses or degree because a) their parents made them or b) they think it’ll get them a job. Doing something you don’t like is a guaranteed path to failure, so make sure you are doing the degree/courses because they interest you and no-one but you. Don’t let Aunty X’s comment about ‘you’ll never get a job in Y’ stop you — if you love doing something enough, you will put in the 10,000 hours to be good at it and you will find a way to make a living out of it in the end even if it’s a little bit away from what you first thought it’d be (reality might stop you eventually, but worry about that later, trust me on that).

Personally, I think cutting the umbilical cord should start with info day (i.e., you should come and leave your parents at home). As much as I love meeting your parents and guessing what you’ll look like at 40 or 50, it’s time to start being an adult, making your own decisions and living with them. Going to my own info-days solo was bloody scary but I’m glad my Dad made me do it (including driving solo to ADFA and back more than once at age 18). Still talk to your parents about what you want to do, they need to feel involved in your life, but you make the decisions, ok, it’s your life and you should be living it, not letting them live vicariously through you.

3. Watch for weasel words: Universities are businesses, and in this modern era, they will throw their full marketing arsenal at you. Like any business, they are trying to win your custom over the options of their competitors, or over buying nothing at all. When you are 5, you get sold your McHappy Meal with a free toy. When you are 15, you get sold your clothes and stuff by knowing that Kim Kardashian or Kanye West wear them. The key trick the universities use is what I call the ‘weasel words’. Hey, why get a science degree when you can have an ‘advanced’ science degree? That other university? Well, it’s nice, but look at our world-class facilities and exemplary thought-leaders who are the pre-eminent agenda-defining researchers of their generation. Blah blah blah… I’m not a commentator, but welcome to the smoke and mirrors game, kids; you’ll be masters of it by the time you’ve finished your second post-doc, I promise you.

Before you even go near a university, think about what it is that you want to do with your life, which things about a university you won’t budge on, and which things you will. Work out what really matters to you: for some it’s prestige and ranking; for me it was a mix of quality of education (UNSW beat Macquarie, partly also on commute distance) and a good culture that I could see myself fitting into (UNSW beat U. Sydney hands down). Then, and only then, start seeking the right place, and in doing so, always look to cut through the marketing spiel to the heart of what’s going on. Ask lots of technical questions (‘What makes you the best uni?’ is not a technical question, ok, it just demonstrates your lack of imagination), and look for the people who are giving you honest answers about those details (hopefully life and good parents have taught you how to pick these people).

When there’s a weasel word attached, always ask yourself ‘What’s in it for them and what’s it costing me?’ Sometimes it’s in your favour; fine, go with it, don’t become a cynic (like me). But don’t walk into this blind either. Sometimes it’s stacked largely to their benefit over yours, and you have to spot those situations. Remember, you’re accruing a huge debt for this education; make sure you get what you want and need for it.

4. Stay flexible: I went into first year (via a convoluted route) wanting to do astronomy; by the end of 2nd year I hated it and wanted to do anything but. If this happens, don’t freak out, this is ok and normal. All through your life your interests will change and you will grow as a person — I would be more concerned if it didn’t happen (if so, perhaps you’re destined to be wearing NB 407s in your 40s :P). My advice to any student at info day is to maintain flexibility in your degree program as much as possible. Delay the decisions that limit flexibility and lock you in until as late as you can.

Don’t be afraid to tailor your degree as you go along and don’t feel like you have to plan every course you’re going to do before you start first year, although this can be a good exercise actually, just to get an idea of what is coming and how it makes you feel.

It’s not ok to be fickle and change enrolment every 5 seconds, that’s stupid, but if you get 2 years into a degree and hate it, don’t keep doing it because you feel you now have to. Follow the subjects you love and the professors that inspire you, that’s where the value in your education is. That brings me to…

5. Teaching quality varies, value the good professors: A corollary of professors being valued for their research (see #1) is that some are truly shocking teachers. On the flipside, some are truly fantastic, and your time in their class will change your life. The cruel thing is: we get paid the same whether we teach well or teach poorly. Worse, the ones who teach well are often sacrificing a little on their research to do so (see #13), and this can often cost them career-wise. They do this because they care about making a difference for the next generation, and you should value it. Put effort into their courses, because seeing you succeed is about the only reward they will get for their effort, but it is a reward worth having. They also feed off your enthusiasm — if you are engaged, they will be engaged; the more you put in, the more they put in. There’s nothing more enthusiasm-killing for a lecturer who actually cares than a class full of dead fish.

Note well two things: a) this doesn’t invalidate #1 above at all, ok. Don’t mistake a professor that cares about the whole class for a professor who is willing to spoon feed you specifically. b) Don’t think that ‘good’ equals ‘always nice, easy-going and generous with marks’. Sometimes to be a good teacher you need to be tough on your students, you need to have high standards, expect quality and express disappointment in sub-standard efforts. Much like a good parent, a professor is not there to be your friend. You are paying them to make sure you’re prepared for the workforce.

6. Be willing to get bad marks, take criticism and learn from it: The students I respect the most are the ones who tank an assignment or exam, and rather than come begging/arguing for marks, go ‘it is what it is; what’s the one thing I could have done better that would have made the biggest difference?’ Sometimes you won’t even need to ask this (so don’t, our time is precious — see #13), just work out how to fix it.

The students who excel most at uni are the ones who learn early how to go about improving themselves. I teach 1st year electromagnetism and in my first lecture, I show my first year grades (see below).


Mediocre, huh. Right now, you’re probably thinking: How the hell did that guy get to be a professor? Easy, I worked out how to adapt to university, how to teach myself, how to get good marks in assignments, how to cope with and do well in exams. My marks were much better by the end of 4th year. Every assessment task didn’t end when it was submitted; it was forensically investigated on return to work out how to not make the same mistakes next time. This is the only way to get ahead.

7. Don’t sweat first year: The natural follow-up to #6 is to not sweat your first year too much. I hate to say this, but your first year marks will to some extent be a function of your past educational opportunity (note that ‘some extent’ means ‘some’ and ‘not entirely’, ok — you can still influence them with your level of effort, so don’t use this as an excuse to slack off, ever). If you went to a ‘top school’‡, then you will likely be better prepared for uni and first year will be easier. If you didn’t, then you will probably be playing catch-up and your first year marks will suffer a bit in comparison. Don’t let this fool you into thinking you can’t compete with the better-positioned students, or let it get you down — instead, use some of your spare time in first year strategically to master the new system you’re in and save your strength for 3rd and 4th year where it really counts. If you can get #6 figured out, you’ll gradually catch up to the competition, and enjoy watching the filthy looks they give as some ‘rough looking westie kid’ (or you) swoops in and steals what they feel is rightfully theirs. ;P

8. Learn the value of sufficient ‘quality’: Every incoming undergraduate should be forced, at gunpoint if necessary, to read ‘Zen and the Art of Motorcycle Maintenance’ by Robert Pirsig. It is a great book, but what I really value it for is its ongoing meditation on quality. When you come into uni it’s easy to fall into what I call the ‘correctness fallacy’ — the idea being that as long as the answer is right, it doesn’t matter what it looks like. Students in this trap get to the point of their assessment submissions being badly scribbled, disorganised crap scrawled over food-stained papers torn from a notebook at the last minute. When they don’t get top marks for this, disappointment and anger follow.

Look, when you buy a car you expect it to drive but you also expect it to look decent. It doesn’t need to be a Ferrari but it can’t be a chassis with a motor and a single seat in it either. If you want good marks, go for something classy, stylish, not too expensive or ostentatious, but with an obvious eye for detail and quality.

Learn how to produce quality quickly, and learn when to stop (80/20 rule).

9. Don’t be a nigel; make friends in higher places: I don’t mean your professors; if they are being true professionals they should be rather aloof until you are a semi-permanent member of their research group (honours min, probably Ph.D.). What I mean here are the students in the year above you, of course! The one blessing I had as an undergrad was that some of the folks in the year above me were great fun to hang out with, more so than my own year with a few notable exceptions. If you hang out with the year above, what you get is a sneak preview of what’s coming in the next year, and you can use this to make really wise decisions about what your next moves are gonna be. Everything from which lecturer sucks/rules to what the ins-and-outs of given courses are can be obtained in this way, and you can use this information to your benefit. Hey, if you have a certain naive Russian physics lecturer, you might even catch them giving the same exam two years in a row, get the paper from last year’s class, spend a few days solving it, and walk into the exam knowing all the answers…😉

10. Before you enrol in a course, consider who is teaching it: If it’s a core course there may be little you can do to avoid it, but at least you will know that you’ll be teaching it to yourself and can factor in the time for that. Those of you who learned to do this in high school will now have the advantage — these will often be the people who didn’t go to a ‘top school’ and therefore sometimes had useless teachers during the HSC (see #7).

If it’s an elective, the lecturer sucks, and missing it is not going to be a ‘job-stopper‘ on competency for employment, then skip the course. You can always learn it later independently if you need it (see #6). If it’s an elective that’s kinda interesting and the lecturer is a star, then hey, why the hell not? You might find a new subject to love that you never anticipated, and if not, then at least you’re gonna enjoy the course.

11. This is not the HSC, do the hardest courses you can: One of my colleagues has a great philosophy when it comes to uni students, particularly those doing the sciences (if you didn’t detect the bias towards science, wake up) — don’t respect marks, respect how hard the courses a student can pass are. I agree: a credit in 2nd year higher complex maths beats a HD in the normal course. Push yourself to do new things and hard things; it is the best way to learn and broaden your horizons. If you focus on just getting high marks in easy courses, you’re robbing yourself.

When it comes to getting a job, sell the level of the courses and your commitment to pushing yourself by undertaking them. Get an academic to comment on this in your reference letter. I guarantee you, any employer worth their salt will value someone who can demonstrate the ability to push themselves outside their comfort zone even if that compromises their scores a bit over someone who just keeps doing the same easy stuff to get high scores.

12. Don’t forget to have fun: Sometimes you will have to work really hard; the semesters of your honours year are a really good example. Your survival and sanity will depend on your ability to manage time, because this will mean you can still fit the essentials of life in next to getting excellent work done. The essentials are: a) getting enough sleep, b) getting enough exercise, c) having some fun and laughs. Do not compromise on these long term, or you will go crazy and destroy yourself. To make c) easier, see #9 and expand to the people in your own year and the year below. Some of my best fun at uni was during the most stressful times, just letting off steam with fellow students.

13. Realise professors are stupidly busy: It might look to you like all we do is stand in front of a lecture theatre for 3 hours a week and then retire to our offices and surf the internet. The reality is quite the opposite. Because we are valued for our research, not our teaching (see #1), when we are not teaching we are doing research, and it is incredibly time-intensive — essentially we have two jobs, one full-time and another part-time. Most of your professors will be closer to insane workaholics than lazy bums. They will have their own priorities, and to close the circle on #1, you are rarely high on that priority list. Respect their time — don’t be afraid to use it when you need it, but be conscious that it’s competing with other things — they will really appreciate it and respect you more in return.

14. You will probably not become a professor: Too many students either come in thinking they’ll become a professor or, by the time they’re done, lack imagination for the job market and decide they’ll become a professor. I can see why: it’s a nice job, it has its ups and downs sure, but it’s interesting, has lots of travel and pays decently.

Sorry to be the bearer of bad news, but it’s a saturated market. Let’s assume you complete your undergrad and then do a Ph.D. too. After 8 years of uni (minimum) you will be on the track to academia, but it’s a long road from there. When I came in, it was probably 1-in-10 Ph.D. graduates who might get a permanent academic position and eventually end up a Prof. As of 2016, I would say it’s getting close to 1-in-20 or 1-in-25, and with the rate we’re producing Ph.D. graduates it will soon be 1-in-50 easily.

Just sit down and do this as a Fermi problem, you’ll see what I mean. Don’t let me stop you if you are seriously talented, committed and hard-working — I just want to be honest about what you’re up against. Before you get sold a Ph.D., make sure you’ve thought seriously about whether it’s worth the time, money and heartache for you in the job market you’ll be entering 5 years later. The unis will happily take your money, but is this right for you?
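If you want a starting point for that Fermi problem, here’s a minimal sketch. Every number in it is an assumption I’ve made up for illustration; swap in estimates for your own field and country.

```python
# Back-of-envelope Fermi estimate of PhD-to-professor odds.
# All inputs are illustrative assumptions, not real statistics.

phd_grads_per_year = 200     # assumed: new Ph.D. graduates in the field each year
permanent_posts = 300        # assumed: permanent academic positions in the field
career_length_years = 30     # assumed: average length of a permanent career

# In steady state, posts only open up as incumbents leave:
openings_per_year = permanent_posts / career_length_years   # 10 per year

odds = openings_per_year / phd_grads_per_year
print(f"Roughly 1 in {round(1 / odds)} graduates lands a permanent post")
# prints: Roughly 1 in 20 graduates lands a permanent post
```

With these made-up inputs the answer lands near the 1-in-20 quoted above; the real point is how quickly the ratio worsens as graduate numbers grow while the number of posts stays flat.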

With that I think I’ll stop. Perhaps there’s a volume 2 coming some day. Oh wait, one bonus tip — #15: Don’t be afraid to quietly sneak into the back of a lecture to ‘preview’ a given lecturer unless it’s a tiny class and you’ll obviously be noticed/disruptive (and remembered). It can be a great way to find out what you’re gonna get in future for yourself. If you’re starting as a first year in Session 1, come have a wander around campus before session starts, find your theatres, find where the food is, perhaps even sneak into a summer lecture to find out what it’s like…😉

Good luck. Happy to take questions in the comments.


* A possible exception: you are a 99%+ ATAR student in your first year.

†Kudos to Geraint Lewis at U. Sydney who also says this all the time.

‡Probably evident I didn’t go to a ‘top school’. If you did, that’s cool; as a professor I love you as much as the next student and I want you all to succeed to the best of your abilities. My advice to you though is: recognise that you have a starting advantage and that some of your fellow students will use that as motivation to chase you down and beat you. Whatever you do, don’t rest on your laurels and get complacent; you will play directly into their hands.




Demographics of Destruction — A bonus analysis

Just one extra piece of data analysis following my post on fixing the ARC Discovery Projects scheme. This little chunk ended up on the cutting room floor last night as I couldn’t fully make sense of it. But after 5 hours broken sleep, and some drawing on the shower window with my finger, I think I can explain it.

The analysis is the two dashed linear fits to a subset of the ARC’s data shown below.

[Figure ‘ARC Tampered’: the ARC cohort data with the two dashed linear fits]

The fits are to the percentage of all CIs in the 10-25 yrs post-PhD bands, for males and females separately, and while I hate fitting a line to three data points, the trend is unmistakable. Let’s try to unpack it a bit.
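For anyone wanting to reproduce this kind of trend line, it’s a one-line least-squares fit. A sketch with placeholder numbers (the ARC percentages aren’t reproduced here), just to show the method:

```python
import numpy as np

# Midpoints of the 10-15, 15-20 and 20-25 yrs post-PhD bands,
# with placeholder percentages of all CIs (not the real ARC figures).
years_mid = np.array([12.5, 17.5, 22.5])
pct_of_cis = np.array([18.0, 15.0, 12.0])

# Degree-1 (linear) least-squares fit.
slope, intercept = np.polyfit(years_mid, pct_of_cis, 1)
print(f"trend: {slope:.2f} percentage points per year")
# prints: trend: -0.60 percentage points per year
```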

The rise from 0-5 yrs to 5-10 yrs makes sense — this is the next generation coming through into the fellowship stage — and it will be a large demographic fraction due to the ARC’s (worthwhile) recent focus on ECR support and our perverse use of Ph.D. students as a cheap labour force (for another post). This would then make the peak at 10-15 yrs and the subsequent drop-off attrition via the ‘game of musical chairs’ that happens first in the transition from DECRA to FT and then from DECRA/FT to tenured junior hire. Going forward, I predict this peak at 10-15 yrs post-PhD to shoot upwards, with the drop-off becoming shorter and sharper. This will essentially be the ‘Superdoc’ effect recently highlighted in Nature.

What is unusual is that this attrition doesn’t continue right through the dataset — if we’re serious about competition in science, shouldn’t we distill and distill so there’s only a few left at the end? Where are all these 25+ year applications coming from? How real is this thing we see in that graph?

Part of the reason I didn’t include this as an ‘appendix’ to the earlier post is that I now need to start making assumptions to cover missing data — that earlier post is pure data analysis with no assumptions. The key here is to think about age rather than years post-PhD. I’m going to assume Ph.D. completion between ages 25 and 30; I know people will launch an attack on me about mature-age Ph.D.s, but if you work inside the system, you know those are typically down at the <10% level, so bear with me. If you do this…


…you get a column B like the one above. What’s going on here is that 25+ years post-PhD ends up being a 50+ age bracket, which is demographically broader than the other bands. We really want to compare apples with apples, so in Rows 10-14 I speculate about what that upper cohort probably really looks like. Retirements should kick in strongly from Row 11, and this is consistent with many years of just ‘looking’ at the ARC outcomes lists. Note that I’ve combined genders here, and have taken a gender-weighted success rate per cohort in order to get accurate numbers.
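One plausible reading of ‘gender-weighted’ here is pooling the applications and fundings across genders within each cohort, so each gender’s success rate is weighted by its share of applications. A sketch with placeholder counts (not the ARC’s actual numbers):

```python
# Placeholder application and funding counts for one age cohort.
apps   = {"male": 120, "female": 80}
funded = {"male": 24,  "female": 14}

# Pooling weights each gender's rate by its share of applications:
# (24 + 14) / (120 + 80) = 0.19
rate = sum(funded.values()) / sum(apps.values())
print(f"cohort success rate: {rate:.1%}")
# prints: cohort success rate: 19.0%
```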

Let’s get back to graphs…

Model 1

Now that we’ve ‘unpacked’ the 25+ year cohort a bit, things look more sensible. The green dashed line is the ‘ramp-up’ from ECR programs; the red dashed line is a sensible trend for academic attrition due to the game of musical chairs and people finding other things to do. There’s only one place where the data doesn’t fit the trend, and it’s in the 45-60 age bracket — I’ve highlighted it with a yellow triangle and will call it the Matthew zone. If you change the distributions in Rows 10-14, this effect doesn’t vanish, it just reshapes slightly (you would need a lot of very old scientists getting grants to make it go away).

The glut of late-career scientists is obvious, as is their disproportionately large access to available scientific resources (since that all starts with cash). Note once again that I’m purely using CI-in-any-position statistics here, not lead-CI or sole-CI statistics. As discussed in my last post, the latter would only massively exacerbate what we’re seeing in the data I’m presenting.

Another way to see this is:

Model 2

where the blue dashed line is retirement attrition and the pink triangle is what I often call the ‘no-Future Fellowship’ or the ‘valley of the shadows’.

Probably not a lot more to say here unless the ARC is willing to release some sole-CI and lead-CI statistics so we can know the full story. I don’t know that we’ll ever see that happen.

Otherwise, here’s yet more data pointing to ‘the scientific recession we will have to have’ in Australia (to quote Paul Keating), because the next generation are currently being starved at mid-career at the expense of the scientists near the end of their career.

Fixing ARC Discovery Projects

This is a contentious subject, and I’m probably doing this at some personal political risk, but I think it’s a discussion that must be had, and it only happens if someone is brave enough to kick it off, so here goes. Before I begin, a disclaimer — I’m happy to be corrected about anything written below, particularly if it might improve the transparency of the system and/or promote mature discussion.

The Problem

The problem, as I see it, is a demographic skew in funding that likely comes from many factors, but one that, in the current super-tight funding environment, threatens to leave us with lots of retiring professors, lots of people at DECRA and Future Fellowship (FT) stage, and a wide gulf in between.

The skew is often talked about amongst more junior researchers, and often dismissed as bogus (or ‘anti-meritocracy negativity’) by those at the top of the system, so let me back it up with real stats… As my raw data set, I will take the ARC’s own outcome statistics from today (see below):


The statistics state that there were 10,769 participants (combined CIs and PIs), of which 2,587 (24%) are female and 8,162 (76%) are male. The graph is in terms of CIs, not CIs + PIs, so if you add up the percentages, they should add to less than 100% (and they do, see below). This enables you to tease out how many CIs there are, if you extract the data from their plot accurately enough. Since I like being precise about these things, I’ve chosen to do this using DataThief III… The results appear below (happy to share the actual spreadsheet by direct request):


If you add up the percentages measured from the bar graph (Column C), they only add to 93.3% (Cell C24). The missing 6.7% of 10,769 must be PIs, which turns out to be 723. Running the extracted percentages against 10,769 participants (Column D) adds to 10,046, with 10,046 + 723 = 10,769.* Column E is Column D recalculated as a percentage of total CIs (10,046) rather than total participants (10,769) — this is vital to getting meaningful data in Column I (see below). Anyway, now that we know the exact number of CIs, we can pull out the numbers of funded CIs using the success-in-band values extracted from the ARC’s own graph (Column F).

Column G then gives the raw number of successful/funded CIs in each gender/age cohort — note this is CI in any position, lead or otherwise, a factor we will return to further below. In total there are 1,788 funded CIs, of which 1,310 (73%) are male and 478 (27%) are female. The overall success rate at CI level (not CI + PI) is 17.8%.

Finally, in Column H I calculate the percentage funded in each cohort relative to the total funded, and in Column I look at how this funded percentage differs from the cohort’s share of applying CIs. If this number is negative, the success rate in your cohort is lower than the overall success rate, and vice versa if it is positive. Note that Column I adds to net zero, as it should: it represents a measure of ‘success’ displaced from one cohort into another, relative to what was submitted.
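As a sanity check, the headline arithmetic above can be reproduced in a few lines. This is a sketch using the figures quoted in the text; the per-cohort shares below are hypothetical placeholders, since the real values live in the spreadsheet:

```python
# Reproduce the headline numbers: total participants, inferred PIs,
# total CIs and the overall CI-level success rate quoted in the text.

participants = 10769      # combined CIs + PIs (ARC statistics)
cis = 10046               # sum of Column D (extracted % x participants)
pis = participants - cis  # the 'missing' 6.7% of participants

funded_cis = 1788
success_rate = 100 * funded_cis / cis  # overall CI-level success rate

# Column I logic: cohort share of funded CIs minus cohort share of
# applying CIs. Shares here are hypothetical; whatever the real
# values, the deltas must net to zero, as noted in the text.
applied_share = [0.10, 0.25, 0.30, 0.35]
funded_share  = [0.08, 0.22, 0.28, 0.42]
delta = [f - a for f, a in zip(funded_share, applied_share)]
assert abs(sum(delta)) < 1e-9  # displacement nets to zero by construction

print(pis, round(success_rate, 1))  # -> 723 17.8
```

The two printed numbers match the 723 PIs and 17.8% success rate derived in the text.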

Numbers are nice, but let’s look at this in terms of graphs.


I don’t want to gloss over this, so let’s look at the graphs one by one. The pie charts are the percentage of CIs (top) and the percentage of funded CIs (bottom) for each cohort, starting with young men at North and running clockwise with increasing age. I’ve highlighted male researcher slices with blue borders and female researcher slices with pink borders (this is naff, I know, please forgive me this one). The big result in these pie plots should be obvious to anyone who works in academia — the male:female ratio is massively skewed. To the ARC’s credit (since I know they do put effort into this), the ratio doesn’t get appreciably worse in the passage from application to funded grant.

From here, the stats are better viewed as bar graphs, both of which show the same data as the pie charts. Comparing the two bar plots, the most apparent feature is that males 25+ years post-PhD are the largest funded cohort, and the one whose funded share most exceeds its share of applying CIs. The latter is even more obvious when you plot the difference between the percentage a cohort contributes to the funded CIs and the percentage the same cohort contributes to the applying CIs. As mentioned earlier, a positive value here means your cohort’s success rate is higher than that for all CIs put together; a negative value means it is worse.

This graph conclusively demonstrates what many are complaining about — younger researchers, both male and female, are suffering a lower success rate, in real terms, than older researchers, and the real winners are late-career males. Now there are two important things to bear in mind here that make the story my graphs tell look better than the true reality:

1. These stats are for CI in any position only, not for lead CI or sole CI (that data is unavailable — but see Gaetano Burgio’s excellent article on data-mined lead-CI stats for the DP16 round for more). I’d love to see a deeper demographic analysis of either of these, but my prediction is that lead-CI and sole-CI grants are overwhelmingly dominated by late-career males (see the plot from Gaetano’s blog below) — this means they have more cash, as they are less likely to share it, and if they do share it, they have more control over it. As such, they gain accumulated advantage that helps them in the heavily track-record-dominated (40%) assessment for this scheme. The ability to be lead CI on two DP projects whilst others have none exacerbates this effect.


2. We are not considering the ‘multiplying’ effect of other funding schemes, such as CoE, LIEF, Laureate, etc. Assuming these have a similar demographic skew, it is highly likely that those with a big advantage via Point 1 above also have more cash in general, further accumulating their competitive advantage in this system. There were several late-career male CIs in today’s results who already hold CoE funding, and now have also got DP money as lead CI to add on top.

But let’s consider the converse for a second. The younger researchers have less success as a cohort, probably aren’t sole or lead CI, and so have to take what falls off the table from above. Averaged over time, they are more likely to have stretches without ARC funding and, at mid-career level, are less likely to get internal funding: they are too senior for ECR grants and not senior enough to be politically connected, attract the attention of the upper academic hierarchy, and win funds ‘off the top’ or outside announced competitive rounds.

The net result of this is a big problem for Australian science. It is what I like to call the ‘no-Future Fellowship’. It’s what you get after your Future Fellowship, when you start your tenured mid-career stage, can’t apply for fellowships any more, and suffer a disproportionately low cohort success rate in the ‘open pool’ contest for Discovery grants (for more, see my other post on grant outcome demographics — and the figure below that comes from it). The net result is that, with much of the spoils preferentially going to late-career males, a gap will form behind them, and when they all retire, that gap is going to send scientific output in Australia backwards. In a sense, we’re engineering our own scientific recession that we will eventually have to have…

Model 2

The problem is now pretty clear I think… so let’s look at:

The solutions

I’d like to now speculate on some ways that we can potentially fix these problems in the Discovery Projects system.

  1. Change the assessment fractions — Currently it’s 40% investigators, 25% project quality and innovation, 20% feasibility and benefit, 15% research environment. In other words, 55% of the assessment comes from criteria where accumulated advantage plays a massive role. I would realign the fractions considerably, making them 65% project quality and innovation, 15% feasibility and benefit, 15% investigators and 5% research environment. I would possibly even toss research environment in the bin, because anything more than a tick-box for whether the project is feasible at the institution proposed just skews the assessment in favour of higher-ranked universities (i.e., institutional elitism). In the end, ideas are what really matter in innovation, and the best ones should be supported equally, whether you’re a young researcher with a few papers or a senior professor with an h-index of 1000.
  2. Split the Discovery Projects Scheme into two bands: Discovery Senior and Discovery Junior — There is clearly a need to manage the success rates at cohort level in the data above. One way to do this would be to make the proposal go into a separate scheme if any of the CIs on the proposal are 20+ years post-PhD. Another option would be to do this by number of DPs held within the past 10 years, as soon as this exceeds 3, your proposal goes into a separate pool. Alternatively, one could ‘handicap’ the track-record score for all late career CIs — some would argue that ‘track record relative to opportunity’ should do this, but it’s clear in the data above that this is not working.
  3. Go back to the old system of Ozreaders/Intreaders and rankings over scores — I’m happy to be corrected, but my understanding of the systems, based on many research office info sessions and corroborated hearsay, is this. In the old system, the rankings that went to the panel meetings were a complex combination of rankings by different levels of readers, with rankings weighted by how many grants a given reader saw. The benefit of this system is that it removes the bias between one reader and another to a decent extent, and is less easy to manipulate by readers who see only a small handful of grants. The new system of scores has obvious biases in it. Take two grants, one obviously better than the other. One reader might give them an A and a B because they’re a generous marker. Another might give them a C and a D because they’re a hard marker. In a system where scores really count, and aren’t weighted heavily by how many grants you read, those two grants will suffer very different fates (likely only one of them will be funded). One might ask how, in a ranking system, you tell an A and B pair from an A and D pair if you can only say one grant is better than another — well, that’s why you have some readers reading a lot of proposals and their rankings carrying a high weight. I think a lot of researchers who have lived through the shift from the old Ozreader/Intreader days to now will know that the system feels much more random, with your outcomes heavily dependent on your ‘luck’ in getting the right or wrong referee. You can have great comments and still get nothing in the outcomes. An added advantage of rankings over scores is that it is harder for malicious referees to make soft-kills (i.e., pegging the score down slightly, just enough to spike a competitor’s grant without it being obviously anomalous). Hell, I’d almost say that readers shouldn’t score or rank at all. 
Leave that to the panel who see enough proposals that they can reliably and meaningfully judge the quality of one relative to another. The readers can make their points via their comments, which should be almost entirely focussed on the project and advice on technical aspects beyond the knowledge of a panel member (and probably would be if we implemented Point 1).
  4. Ban anyone who is a CI in a Centre of Excellence from holding any Discovery grant for the duration of funding to the centre — This one is pretty obvious really. You put a bunch of sharks in a pond full of goldfish, and before long you have lots of hungry sharks and no goldfish.
  5. Make CIs eligible to hold only one DP, not two — This will be a controversial one, but let’s think about it for a second. Each year fewer than 20% of proposals get funded (this year it was 17.8%). This is not because 80% of them aren’t worth funding; quite the opposite. For the 20% that are funded, there’s probably another 30% that are equally good and only ranked lower due to biases in the scoring system, luck with referees, etc. As everyone knows, the distribution of quality in grants has a tall narrow peak, and that peak sits below the level where the cash runs out, so only the high-side tail of the peak gets any money before the budget is exhausted. If we cut the number of DPs held from two to one per eligible CI, would it hurt us that much? Probably not, really. More people with great projects would get funded, and they would be more competitive than in a system where they can’t get money (or get it inconsistently) while others continuously have two grants running, year after year, mostly on closely allied ideas. On that point, by funding more people to do only their #1 most innovative project, we actually diversify our funding system into more areas, more viewpoints and more mindsets than we have when some do their most innovative project plus another one they come up with alongside it to bring in money and advance their career. Most of these researchers also teach, and with only one DP, they would have more time to teach better, improving the strength of the students coming into Australian science or leaving for other countries. Some of these researchers also do outreach, which is under-rewarded given it is essential in convincing the public that they should invest some of their taxes in us doing our technical stuff they can’t understand — with only one DP there’s more time for that too. 
And finally, there’s more time for researchers to have a healthy work-life balance if they aren’t permanently chasing or managing two DP grants. As we all know, healthy balance means more creative thinking, which means more innovation. It would also be significantly more family friendly, which matters a lot to the cohorts with a lower proportional success rate in the graphs above! If, at some point, the ARC budget came back to a level where there was more cash available than worthy projects demanding it (unlikely), one could always revert to allowing more than one grant.
  6. Reduce the amount of paperwork involved in applying for grants — My colleagues overseas can’t believe how long our proposals are. My last one was 100 pages for myself plus 2 PIs; only 10 pages of it were actual science. This is insanity — it means we waste lots of time writing them, especially when the success rate is 17.8%, and it means many international readers won’t assess them because they take forever to wade through. Bear in mind that this disproportionately affects those with a lower grant success rate. Those who get grant after grant see a return every time they invest in the forms, whilst those who have to fish for years do more work — this produces an accumulated productivity advantage that skews the system in favour of cohorts with a disproportionately high success rate (late-career males, inevitably). The ARC needs to look at best practice overseas. Rarely have I reviewed a grant that’s more than 20 or 30 pages, even with a half dozen investigators on it. The problem in Australia, in the end, stems from track record being such a massive part of the assessment. It inevitably means a CV arms race, with ever-growing detail in the forms as people try to engineer the system in their favour via application policy. In the systems I’ve seen with the shortest grants, it’s more about the idea, and a 2-page CV suffices — in those systems the readers don’t even score the track record; they’re just asked to comment on whether the researchers have the ability to do the research or not. That really should be all that matters in a system valuing innovation: sufficient competence, not a giant CV.
  7. Once you reach 20+ years post-PhD, your track record is entirely about legacy — A slightly more innovative approach might be to give you 20 years post-PhD where your track record is entirely measured by the traditional means: what you produce as published output. After 20 years, that gets completely ignored and it’s all about the quality of the people you produce. This would put the onus on late-career researchers to repay their past funding success by enabling the next generation to do science, in exchange for some slice of the action. This could be combined with Idea 8 below.
  8. Enable the budget to be weighted by CI, even between institutions — A major impediment to collaboration in the DP scheme is that the budget all goes to the lead CI’s host institution. As a collaborating CI, the credit you get at your own institution for a grant hosted by another institution is near zero — mostly because your institution sees no block funding from it. This is a disincentive to collaborate. However, if you could split the funding up front (say, a UNSW-ANU collaboration where from the outset 50% goes to UNSW and 50% to ANU, or 40/60 or 80/20 as decided by the CIs), then everyone’s happy, and if you need to adjust later, you can transfer funds as happens now. The same could happen with senior CIs under Idea 7. They could come onto a grant led by more junior CIs, with a stipulated percentage specified for them to spend. This would ensure legacy-building in the next generation whilst keeping senior researchers alive in the system. It would also prevent bullying by ‘silverback’ lead CIs who carry junior CIs to strengthen their proposal in the track-record arms race whilst giving them little real control over the research once it gets funded.
  9. Properly qualify ‘opportunity’ in the context of track record in the proposal — If we are going to insist on track record being such a large part of the assessment, then we need paperwork sections that enable real opportunity to be properly defined. The key thing that should be declared is exactly how many tenured staff, postdoctoral staff and Ph.D. students work under you. It’s easy to have a stellar publication output when you are a senior professor with 4 junior academic hires under your control, a half dozen postdocs, 3 technical staff members provided by your university and a small army of Ph.D. students. If you have 3 Ph.D. students and that’s it, getting even close to the same output is completely impossible. Internal funds awarded to your projects should, in principle, be declared too. I’ve heard lots of valiant talk about how track record is always ‘carefully considered relative to opportunity’, and find this mystifying, because the precise information you need to judge that as a reader is often never made available. I’d still argue this problem is best fixed via Idea 1 (making track record count much less), but failing that, we need to start doing this properly.
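The reader-bias problem in Point 3 can be made concrete with a toy example. This sketch (hypothetical grants and scores, not the ARC’s actual scale or process) shows how raw scores punish grants that land with a hard marker, while within-reader rankings cancel the marker’s generosity out:

```python
# Two readers see two grants each. Reader X marks generously (A/B),
# reader Y marks harshly (C/D), but each sees the same quality gap.

scores = {
    ("grant1", "X"): 4.0,  # X's better grant ('A')
    ("grant2", "X"): 3.0,  # X's weaker grant ('B')
    ("grant3", "Y"): 2.0,  # Y's better grant ('C')
    ("grant4", "Y"): 1.0,  # Y's weaker grant ('D')
}

# Fund the top 2 by raw score: both of X's grants win, including the
# weaker one; Y's better grant misses out purely due to marker bias.
top_pairs = sorted(scores, key=scores.get, reverse=True)[:2]
raw_winners = [g for (g, _) in top_pairs]

# Rank within each reader instead: each reader's better grant gets
# rank 1, so the per-reader offset cancels out of the comparison.
ranks = {}
for reader in sorted({r for (_, r) in scores}):
    seen = sorted((g for (g, r) in scores if r == reader),
                  key=lambda g: scores[(g, reader)], reverse=True)
    for rank, grant in enumerate(seen, start=1):
        ranks[grant] = rank

rank_winners = sorted(ranks, key=ranks.get)[:2]

print("raw-score winners:", sorted(raw_winners))    # grant1 and grant2
print("rank-based winners:", sorted(rank_winners))  # grant1 and grant3
```

Of course the real old system also weighted by how many grants each reader saw; the point here is only that rankings are invariant to a reader’s overall generosity, whereas raw scores are not.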

It is now nearly 3am, and I can’t think of any more ideas to round out the ten, but perhaps that’s OK. If you’ve read this far, thanks for paying attention to all this. Improving the depth, breadth and diversity of the scientific community is central to innovation. Having a grant system that is skewed to one cohort and/or largely decided by accumulated advantage destroys this. The data I’ve presented, in my opinion, shows this is clearly a problem in the current ARC Discovery Projects scheme, even before you add exacerbating influences like certain advantaged cohorts being more likely to be sole or lead CI, to hold more than one DP, or to benefit concurrently from multi-million-dollar Centre of Excellence funding.

Fixing this problem is vital to maximising the national innovation potential against available finite resources, and the current government should consider it an urgent problem if they are serious about science and innovation in Australia.

For more reading — see also:

  1. “A Note on the Australian Research Council (ARC) Discovery Program” by Gaetano Burgio.
  2. “Demographics of Destruction — A Bonus Analysis” by myself.


* For full honesty, since I believe in it: the spreadsheet actually gives a total of 10,770 in Cell C28, which is off by 1. This comes about because of rounding issues in Cells D10, D21, D24 and ultimately D28, since I need to deal with x% of 10,769 being a real number, and humans coming in integer units :).

We’ve gotta stop worshipping workaholics…

I’ve been wanting to write about this for a while now, and the perfect opportunity has arisen, so it’s time to let rip. Few can have missed the shocking post in Science a few days ago titled “Getting noticed is half the battle” by Eleftherios Diamandis. What I find most shocking, beyond his exploitation of his wife and neglect of his kids, is that this is actually being promoted as the gold standard for getting into academia!

It’s going to be hard to beat Bryan Gaensler’s excellent counterpiece, “Workaholism isn’t a valid requirement for advancing in science”, in The Conversation today, but let me talk to it nonetheless…

As Bryan points out, it’s easy to fall into this trap… I fell into it the same way. People there earlier, people there later, people there on weekends, step up your game to try and keep up, before long all you do is work. I’ve been around this vicious cycle twice now — workaholism really is an addiction in many ways, with recoveries and relapses.

I was probably showing inclinations towards future workaholism during my Ph.D.; I’d say most talented students do. But during that time, I was driving myself out of enthusiasm and interest (good), not expectation, coercion or ‘the arms race’ of academia (bad). I was massively fortunate to have great supervisors during my Ph.D.: I was left to my own hours and, while encouraged to push myself, also encouraged to be responsible about taking time out. The only time we put in very long hours was in the 4-6 week blocks when our fridges were running — then we worked 8am-10pm and on weekends, simply because experiments cost us about a thousand dollars a day to run. We did these blocks once or twice a year, and when we did, we prepared in advance and took a week or two off afterwards.

Otherwise, we worked pretty normal hours. During the third year of my Ph.D., the group got a new postdoc from Europe — Heiner Linke, who is now a Professor at Lund University. Because of space issues, new students and me writing up, I moved out of the lab (back in those days we had desks in the lab — there are pros to this) and shared an office with Heiner. It was a very formative experience in my career… I just couldn’t believe how much someone could fit into a ~40 hour week. Heiner would come in about 8 or 9, leave about 5, and get much more productivity out of his day than I did. I was more or less writing around the clock at that stage trying to get finished, and with my 60+ hours a week I didn’t seem to be even close to getting as much done. I was exhausted and unhappy and struggling; he’d just bounce in, get it all done, and be off for a swim at the beach. It was a real eye-opener, because I realised how much you can get done by being smart about your day — though realising this and making it a reality for your daily life are two different things, and I still don’t think I have it mastered (more on this below).

Things got crazy for me around the time I got my ARC Postdoctoral Fellowship (DECRA equivalent). I’d slipped into the habit of trying to win the arms race by outworking everyone else. At this stage my good role models were gone, largely replaced by people who did the same. You can operate like this for a while and it works, but you can’t do it forever. Come 2007 I had a continuing position and started lecturing, and I was falling to pieces. I was eating takeout and junk all the time, I was always feeling off and had piled on the weight, and I was consuming insane amounts of caffeine to switch on after 6 hours of sleep, working all day, then drinking way too much to wind down while working in the evenings. I was persistently grumpy and short-tempered, hated my job, and was getting little productivity out of myself. I wasn’t efficient or effective any more. I was literally ready to write a resignation letter or throw myself under a bus.

Luckily I saw what was going on and managed to turn it around. For a while I pulled the hours right back, forced myself to get daily exercise and eat healthily (I shed 19 kg across the next year), got 8 hours of sleep a night, dumped the crazy caffeine-alcohol merry-go-round, and focused only on doing the work that was essential. It was either that or I walked away and never came back — I had little to lose at that point. Remarkably, by 2008 I was having one of the most productive streaks of my entire career. The ideas were flowing, I was teaching well, I was doing great outreach (all my YouTube work was in that period). I was fit and happy and going places — come mid-2009 I managed to land one of the first round of Future Fellowships. Little did I know this would soon bring it all back down again…

The first few years of the fellowship were great, but by 2012 I was falling into old ways again, mostly under the pressure of achieving what’s expected on a fellowship. It’s quite easy to turn your life around when things are bad — everything is shiny and new and interesting, and feeling better drives you forward. But eventually you reach a plateau, and it’s easy to let that little devil on your shoulder, the one that says ‘oh, but you won’t get your next grant if you don’t get this paper’, talk you into letting little bits of your healthy regime slip away. Before long, you aren’t sleeping enough, you’re working in the evenings or on weekends again, and your edge starts to go blunt. It takes more time to get less done, and the pressure to beat the competition sees you saying yes to more things you don’t want to do. You lose your creativity and your enjoyment of the job. I got here again at the start of this year, and I’m only just recovering, mostly by being really strict with myself about living well.

The moral of the story: As Bryan’s post says, there really is an optimum here, and if you push beyond it, you start losing your productivity. You need to be disciplined about being balanced at the point that maximises your effectiveness and efficiency.

But I want to return to a point earlier before finishing this post: How do we fix this problem? I see three parts to this.

The first is that we need to change our role models for productivity in science. We should stop worshipping workaholics like Diamandis, who I’m sure will regret his choices later in life when he realises how much he lost to his career. We need to replace them with new role models — people who do manage to be highly productive while having a great life. From my earlier discussion, some of you will say ‘Yeah, but it’s easy as a postdoc to work 9-5 and make a Ph.D. student sharing your office think life is all roses; let’s see him do that as a professor’. Funnily enough, I still collaborate with Heiner and he’s still doing it — he’s director of an institute and is usually seen getting on his bicycle and heading home at 5pm. He takes his holidays, all of them, and disappears off to his summer house with his wife (who also has an academic career) and kids — people who work with him know you’ll get no email replies in this time. He runs triathlons in his spare time. It really can be done. I think we need more role models like this, and many of us need to join them and make ourselves the example as well… we should let our students see us turn up at 9am, head home at 5pm, work like dynamos for 8 hours, and have a great life in the rest of our week. They need to see that this balance is actually possible, like I was fortunate enough to see myself as a Ph.D. student.

The second is that we need to start passing this insight down. I said earlier that I don’t think I’ve mastered all the skills yet, and I think it’s mostly because I haven’t been trained in how to do it; I just don’t know all the tricks. All I know is what I’ve picked up by osmosis. In modern academia we do a great job of training our ECRs — everyone seems to love giving a course and mentoring at this level. But it seems that once you reach a certain stage, you’re entirely on your own and no one is teaching you anything any more, lest you become a threat to them competitively. I think intense competition is the enemy throughout academia, but more so than ever at the mid-career level, where you’re often left to sink or swim. It’s actually the most crucial stage for a) preventing workaholism, b) preventing the development of supervisors who bully their underlings into the same workaholic behaviour to extract productivity (e.g., the evil Prof. Erick Carreira), and c) setting up role models who can begin to fix this nightmare. I think the Academies and Universities have a serious role to play here in providing training/mentoring on this issue. I sometimes wish I could be a fly on the office wall with people like Heiner or Bryan or Tanya Monro or others who do manage to pull this off and still have a life. I wish these people were being paid to give talks to mid-career scientists about how to get more done in less time. Science Magazine should be interviewing them and writing articles about their advice on how to get it done, not selling people like Diamandis.

The third is that we need to start actively shunning the behaviour of people like Diamandis and, more importantly, Carreira. Putting your family second to advance your career should be actively discouraged; the sort of bullying behaviour that Carreira engaged in should see people officially reprimanded or fired. This sort of thing still happens (I was shocked to recently hear about an entire lab resigning due to bullying by the lab head, and at an Aussie uni too, not the US, where this bullying is more common). We need to put an end to it. Ph.D. and honours students should not be told they’re expected to work nights and weekends because it helps their supervisor’s arms race; it’s outrageous, and people who do this deserve no respect whatsoever. It is because of these arseholes that the rest of us feel pressured to break ourselves and ruin our lives to compete with them.

Ultimately, something needs to change here or science is going to fail. Young scientists are happy to work hard when they are engaged and interested, and they should be encouraged to do so in a way that leaves them happy and enjoying life. If all they see ahead is a life of endless hours and unhappiness, they’ll go do something else, where the pay is much better and the hours are more reasonable.

So I encourage all of you: get your balance right, work to be a role model for the right behaviours, and help others get more out of their day wherever possible. Let’s turn science back into what it should be: the most awesomely fun job around.

Why water + E. coli = superfluid is too good to be true (or the importance of fact-checking for science writers)

I woke this cold, foggy Sydney morning to a tweet that immediately raised a minor blip on my ‘bullshit detector’ (something all good scientists should be equipped with).

Nature Bacteria Superfluidity tweet

Nice click-bait, so I took a look… The first sentence reads “Swimming bacteria can thin out an ordinary liquid and, in some cases, turn it into a zero-viscosity superfluid, researchers report.” This seemed way too good to be true; my bullshit detector went straight from blip to full-on klaxon mode.

Feeling a bit feisty from the cold, I decided to question it… here was the response:

Twitter debate with the author of the Nature News article…

I’ll return to this response later, but the claim of viscosity becoming negative was like waving a red rag at a bull… Superfluids don’t have ‘negative’ viscosity; there’s more to this story than is being sold. So, with a big caffeine hit down the hatch (Red Bull, of course), off I went to the journals to look up the relevant (but sadly paywalled) articles.

It wasn’t long before… “Tell me this is one of your simulations… Alright, flush the bombers, get the subs in launch mode. We are at Defcon 1.”

Here’s my response…

A superfluid is a liquid that has zero viscosity and can therefore exhibit dissipationless flow. This means that one can, in principle, start a flow of the liquid and it will flow forever. The classic example is liquid helium-4, which undergoes a superfluid transition at 2.2 Kelvin. Superfluid helium can do some remarkable things like flow up walls to escape a container or through tiny holes that other fluids can’t get through (a nightmare for fridge-jocks like me but that’s another story).

In this latest experiment, the authors are looking at a fluid that is mostly water but contains a population of live E. coli bacteria at between 0.1 and 1% by volume; the bacteria convert chemical energy (food) into directed swimming motion by rotating structures called flagella. Swimming bacteria are a pretty hot topic right now for many reasons, extending from how the microscopic molecular motors that drive flagella rotation assemble and operate, through to how the collective motions of large populations of these ‘active swimmers’ show complex structure. This research is at the latter end of that spectrum.

Swimming for bacteria is very different to swimming for us humans because of the massive difference in scale. Bacteria live at a size scale where the stickiness of water molecules, and their relentless jiggling due to thermal motion, changes the way the fluid appears to a swimmer — it’s more like swimming in a washing machine filled with hot motor oil than in a nice calm lake. The result is that the optimum way to swim is very different. If we made a human-sized bacterium and put it in a swimming pool, it would be like a car stuck in the mud, spinning its flagella (wheels) and going nowhere.
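The standard way to quantify this difference in scale is the Reynolds number, Re = ρvL/μ, the ratio of inertial to viscous forces felt by a swimmer. Here is a minimal sketch using rough, illustrative sizes and speeds (my own ballpark figures, not values from the paper):

```python
# Reynolds number Re = rho * v * L / mu: the ratio of inertial to viscous
# forces on a swimmer. Low Re means viscosity utterly dominates.

def reynolds(rho, v, length, mu):
    """rho: fluid density (kg/m^3), v: speed (m/s), length: size (m), mu: viscosity (Pa*s)."""
    return rho * v * length / mu

RHO_WATER = 1000.0  # kg/m^3
MU_WATER = 1.0e-3   # Pa*s, approximate at room temperature

# Rough, illustrative numbers (assumptions, not from the paper):
re_bacterium = reynolds(RHO_WATER, 30e-6, 2e-6, MU_WATER)  # ~2 um cell at ~30 um/s
re_human = reynolds(RHO_WATER, 1.0, 2.0, MU_WATER)         # ~2 m swimmer at ~1 m/s

print(f"bacterium: Re ~ {re_bacterium:.0e}")  # deep in the viscous regime
print(f"human:     Re ~ {re_human:.0e}")      # deep in the inertial regime
```

Roughly ten orders of magnitude separate the two regimes, which is why the bacterium’s world feels like hot motor oil and why human-style swimming strokes would get it nowhere.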

What’s different here is the viscosity, which is a measure of how much resistance a liquid shows to a force that tries to make it flow. If a liquid has a high viscosity, you have to put a lot of effort into making it flow. A good example of a high-viscosity liquid is tar pitch, which is so viscous that it looks like a solid and takes decades to flow through a funnel under gravity (cue link to one of my favourite experiments of all time). Honey is more viscous than water, and both are more viscous than air. At the lowest end of the spectrum is liquid helium where, if you make it cold enough, the viscosity suddenly becomes exactly zero.
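To put rough numbers on that spectrum, here are order-of-magnitude dynamic viscosities from standard reference tables (my own ballpark figures, not data from the paper; honey in particular varies a lot with type and temperature):

```python
# Approximate dynamic viscosities in Pa*s (order-of-magnitude, near room
# temperature unless noted). Standard textbook/reference figures.
viscosity = {
    "superfluid helium": 0.0,  # below the ~2.2 K superfluid transition
    "air": 1.8e-5,
    "water": 1.0e-3,
    "honey": 10.0,             # varies strongly with type and temperature
    "tar pitch": 2.3e8,        # as measured by the famous pitch-drop experiment
}

for fluid, mu in sorted(viscosity.items(), key=lambda kv: kv[1]):
    ratio = mu / viscosity["water"]
    print(f"{fluid:18s} {mu:9.1e} Pa*s  ({ratio:.0e} x water)")
```

The span from air to pitch covers about thirteen orders of magnitude, with superfluid helium sitting at exactly zero — which is what makes the ‘superfluid’ claim for a water suspension so extraordinary.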

Back to the topic, which is the bacteria study. The work was done by Hector Lopez and colleagues at Universite Paris-Sud in France, and the idea was to measure the viscosity of water with these swimming bacteria in it. The reason is that the stickiness of water at these scales not only changes the way the swimmers have to swim, but means the swimmers can in turn change the apparent viscosity of the liquid. The way this works is that if there’s some collective behaviour amongst the swimmers, they can drag the liquid with them, making it look, externally, like it flows differently to how it would if the swimmers weren’t there. The big picture is to use measurements of viscosity as a way to look at patterning and structure in the collective swimming behaviour of these bacteria. It’s a clever and interesting way to use physics to attack a biological problem.

To help you all understand the experiment, I’m gonna show you the cool spa in my apartment complex (lucky me, hey)….


My spa, which conveniently looks a lot like a rheometer.

The authors use a device for measuring viscosity called a ‘rheometer’, and it looks a lot like my spa. There’s an outside cylindrical ‘cup’ that holds the liquid and an inside cylindrical ‘bob’; the two are concentric. The cup can be rotated either way at a given speed using a motor (which would make my spa pretty cool fun). The bob is connected to a wire that enables a rotational force (torque) to be applied. The idea is, you start the cup rotating. The friction between the liquid and the cup wall makes the liquid flow with it. Eventually this flow will start to rotate the bob with the liquid. You then get the viscosity of the liquid by measuring exactly how much torque you need to apply so that the bob sits just at the point where it doesn’t rotate (apply less torque and it rotates with the liquid, apply more and it counter-rotates — the balance is a little like the clutch in a manual car). If the liquid has a high viscosity, e.g., tar, the torque needed to stop the bob rotating is huge. If the liquid has a low viscosity, e.g., water, the torque is lower, and if it’s a superfluid, the torque is zero.
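For a simple Newtonian liquid in this concentric-cylinder geometry, there’s a standard textbook (Couette flow) relation between the measured torque and the viscosity. A minimal sketch, assuming that classic result and made-up cell dimensions (the authors’ actual instrument and calibration may differ):

```python
from math import pi

def couette_torque(mu, omega, r_in, r_out, length):
    """Steady torque (N*m) on the inner bob for a Newtonian liquid of
    viscosity mu (Pa*s), with the outer cup rotating at omega (rad/s).
    Classic concentric-cylinder (Couette) result."""
    return 4 * pi * mu * length * omega * r_in**2 * r_out**2 / (r_out**2 - r_in**2)

def couette_viscosity(torque, omega, r_in, r_out, length):
    """Invert the relation: infer the viscosity from the measured torque."""
    return torque * (r_out**2 - r_in**2) / (4 * pi * length * omega * r_in**2 * r_out**2)

# Illustrative cell dimensions (assumptions, not the paper's values):
r_in, r_out, length, omega = 0.013, 0.014, 0.04, 1.0  # metres and rad/s

t_water = couette_torque(1.0e-3, omega, r_in, r_out, length)
mu_inferred = couette_viscosity(t_water, omega, r_in, r_out, length)
print(mu_inferred)  # recovers the viscosity of plain water, ~1e-3 Pa*s
```

The key point for what follows: the rheometer only ever measures torque, and the ‘viscosity’ it reports is whatever value makes this formula balance. Anything else in the cell that pushes on the liquid, children or bacteria, shifts the torque and therefore shifts the inferred viscosity, without the liquid itself changing at all.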

Pretty easy experiment, right? But there’s a really deceptive aspect to it that the authors have exploited to ‘sell’ their paper, and the editors and journalists bought it hook, line and sinker (or should that be hook, link and stinker?). I know this hype originates with the authors and not the editors, as the final paper has the same title as the submission they posted on the PRL submission date (where you can see the body of the article for free…).

Imagine you now take my spa and put 6 children in it. Kids being kids, they very soon work out that if they run around and around in a circle in the spa they can make the water flow around and around; they can then stop swimming and let the water carry them around the spa (I’ve done this, it’s cool fun!). If they then run around and around in the opposite direction, they can make that flow stop and reverse direction. Of course, if the kids just go in all sorts of silly directions, then nothing much happens. You can imagine that if the kids did this while you were trying to use the spa as a rheometer to measure the liquid’s viscosity, you’d get some weird results — the liquid might look like it has zero viscosity or even negative viscosity.

The behaviour above is exactly what the researchers are trying to look at, just using bacteria rather than children. If the bacteria swim collectively, then the measured viscosity will differ from that of plain water — this is fine; it just depends what you try to say about it as a conclusion, and this is where all professional scientists (and professional editors and professional science writers) know that you need to be very careful or people will call you out, and rightly so, because accuracy is everything in science.

The big question here is: if there are bacteria in your liquid whose collective motion makes your rheometer measurement look like the liquid has zero viscosity, is it fair to call it a ‘superfluid’? I think most physicists would argue that the answer is no. If you don’t give the bacteria ‘food’, they don’t swim. If they don’t swim, the viscosity is not zero. So, what you have to do here is pump energy into the system in order to keep the liquid flowing as though it has zero viscosity — but then how is that a dissipationless flow fitting the definition of superfluidity? Well, it isn’t; you’re just being deceived — there’s an agent in your fluid that’s hiding the viscosity. To highlight this, let’s imagine a twist on this experiment for a second…

Imagine that we put our researchers behind a wall where they can’t see their rheometer. We then sneak in and put a very thin perspex cylinder, with a radius halfway between that of the cup and the bob, into the spa. We arrange that the cylinder can be rotated such that when the liquid outside it is made to flow by the rotating cup, the liquid inside the cylinder either doesn’t flow at all, or flows backwards. The experimenters outside would be blown away: what they’d see is zero viscosity, or a negative one if the inner flow is backwards. But there’s dissipation going on that they can’t see, and it originates in our added perspex cylinder. Now for the death punch — the cylinder here is just a proxy for the collective motion of the swimming bacteria. As far as the essential fluid dynamics is concerned, they are completely interchangeable.

So in the end, we’re all being deceived by some physicists who are trying to oversell their work. Let’s be clear: I don’t dispute their data or their experiment — the measurements look correct and the data valid, and they should get what looks like zero viscosity, or even negative viscosity, which is just an indicator of energy being injected into the system by the collective action of the swimmers. But to call this a ‘superfluid’, and actively sell it as such, is an absolute howler — this nomenclature is just plain deception intended to extract impact from the publishing system, in my opinion.

Now that we all understand the experiment, let’s look at the moral of the story. It comes back to my twitter conversation with Chris Cesare this morning. He says ‘good enough for the editors of PRL and the scientists, good enough for me’ — no, that’s not good enough at all. Your job as a professional science writer and journalist is not to just blindly buy the sales pitch of authors trying to hype up their work so they get more impact from it than they otherwise might. And you certainly can’t blame your lack of healthy scepticism on the PRL editors, who also bought the line (probably more the title than the paper) and shouldn’t have. Science is a very dangerous game if we don’t apply our own personal filter of rationality over the results; if you simply go ‘the PRL editors think it’s tops, so it must be’, then you’re being reckless with your own credibility.

Chris seems to be a young science writer, so I wouldn’t want to rip him too hard on this one. But he’s a trained physicist and he needs to keep thinking like one. This should be a good lesson about due diligence in science journalism, not just for him, but for aspiring science writers (and journal editors) everywhere. Do your homework, don’t just buy the hype.

Can we fix academia by disentangling the two core businesses of research and teaching?

It’s been a while between posts. I’ll begin with two caveats. 1. I enjoy writing in stream of consciousness: think fast, write fast, let the warts be topics for discussion. Over-refined arguments are conversation killers. 2. I was an agnostic forced through the Catholic school system. I took joy in arguing contrary points to extreme lengths just to see how far I could get defending them. A loss was often a win.

If you don’t like challenging your thinking, don’t read this article. If imperfect arguments drive you nuts, don’t read this article.

Cue 2015. Twitter is full of discussions of #ponzidemia and the unsustainability of academia. The ‘anointed’ professors get money, often for stuff they’re not best qualified for; track record is king. Junior academics are on the breadline; they spend all their lives submitting proposals, only to have all or almost all of them rated highly and rejected. Sometimes those same proposals appear the next year, authored by ‘anointed’ professors. We have postdocs and superdocs and probably soon megadocs. We have seas of Ph.D. students, who come in bright-eyed and bushy-tailed and, 4 years later, are burnt out and worried whether they’ll even have a future after living on the breadline on their measly stipend while working 60+ hours a week. Their future: work 60+ hours a week for 15 years and, if you’re lucky, you might have a <5% (and falling) chance at some sort of stable position. Don’t have kids, don’t have a life, don’t settle in any given place, or it’s probably game over. Or just leave — sorry, the university thanks you for the business; be sure to return your graduation garments on time.

Enough woe, we all know it, you all get the picture. How the hell can we fix this system?

Probably not by keeping the status quo. So, what’s the alternative? Is there one? Is there a radical solution? Perhaps… but I think it involves breaking the horrible entanglement between the two core-businesses of a university: teaching and research. I see this historical artefact as a cause of many problems in modern academia.

Before I get to a possible solution, I’ll declare two things. First, the other night I read this great article about the value of monopolies — “Competition is for losers” by Peter Thiel in the Wall Street Journal. It says some really interesting things about innovation in competitive and monopoly environments. Second, I also read this great article about HP Labs and the future of computing — “Machine Dreams” by Tom Simonite in MIT Technology Review. Both articles made me realise the massive innovative power that can be unleashed when you put a bunch of really smart people together into well-structured teams with a common mission and, most importantly, good strong continuous financial and managerial support. This already happens sometimes; thinking locally, I see a little of it in some of the Australian Research Council Centres of Excellence (ARC CoEs), although they are often too small, too focussed and sometimes made too closed/narrow by competition and funding limits. Places like CERN and some of the medical institutes that sit outside universities are other examples.

Science is a different game now — once upon a time you could take an academic, a handful of students, and a bit of cash, let them run with their own ideas, and get good-to-great things out. As science has become more ‘pointy’, you now need bigger teams working with more resources over longer timescales to achieve outcomes of real substance. What happens more frequently now is this: you have an academic and a handful of students rushed to graduate by low stipends and a university seeking cash for ‘on-time completion’. The academic is flattened by a teaching load and writing a sea of proposals (mostly full of bureaucracy nowadays) — the students often struggle to get their supervisor’s time, let alone get them to spend half a day in the lab with them when it’s needed (often). Now add a <20% success rate of getting, at best, half the cash they need to do a proper job. The result is scaled-down ideas, often with the truly innovative bits shaved off because they’re invariably the most resource-intensive parts. The outcome is usually a bunch of undercooked, heavily hyped papers, put out because they must be to maintain competitive track records. The papers often report results that are irreproducible, if not because crucial details are omitted to maintain competitive advantage, then because they are flukey one-offs with low yield. There isn’t a lot of serious innovation in this, nor a lot of intellectual enjoyment — everyone is unhappy, fighting for survival rather than doing the brilliant, edgy science they actually dream of.

The problem is ultimately one of resources, not just money but also time. So… here’s a possible solution (one that would require some serious resolve to implement, admittedly, something few governments or organisations have these days…).

We take universities and we strip all of the research out of them, every bit of it. No university does a single scrap of research under this model — they are training organisations pure and simple. The ‘academics’ employed by a university are there for one purpose — to teach undergraduates and teach them really well. Their ‘core business’ is no longer conflicted by any thoughts of doing actual research. Undergraduates don’t need academics who do research, what they need is academics who teach well, who have time to give them to help them learn the subject. More time than academics have in the current system, where they immediately race off after class, or worse, actively dodge students, so they can focus time on writing papers and grants to further something other than their teaching. As far as undergraduates are concerned, you could even say that having academics do research actively harms them because it robs them of the human interaction they need to learn technical subjects with depth.

Before you all start screaming ‘no research, but how could you!’, bear in mind I’m talking about the academics, not the students, here. Students doing research is an entirely separate issue — they can access research through internships at the research institutes discussed below. They can also see a little of it in well-designed undergraduate lab classes, where one can teach the approach, albeit without doing actual research (i.e., generating new knowledge, which rarely happens in undergraduate classes anyway, as the leading edge is too far beyond them and it takes too much time). This means you might want academics who’ve done research before joining the university; that’s fine, but they shouldn’t be trying to do two jobs at once.

Some will say ‘but how do we attract students?’ If you talk to undergrads, you’ll find they mostly aren’t attracted by research until they get a couple of years in and are indoctrinated by the system. It certainly doesn’t determine which university they choose to attend when leaving high school, beyond our collective habit of deciding that the best universities are the ones with the best research. Students choose to go to University X because it’s ‘the best’, and they know this from the marketing, not from a rigorous, informed personal assessment of the research (try talking to them about this; you’ll see exactly what I mean — there are only a few exceptions). If we instead ranked the universities purely on teaching quality, and built all the marketing around that, they’d still behave exactly as they do now. But the nice outcome would be that they choose University X for the right reasons: because it’s the best at what they are going there to get from it — an education. Pulling research out of universities removes this confusion about core business.

We then take the research and move it into research institutes that are entirely separate from the universities, even if conveniently located (e.g., up the road, or across town). The folks working in institutes are all full-time researchers; if they teach, it’s the rare ‘guest lecture’ at the nearby university, perhaps one or two a year. These are proper research institutes, in the sense that they have a specific focus encompassing all expertise in that area for a given nation or state (i.e., they are true monopolies). They have a proper management structure from top to bottom, such that research directions are decided from above by people who know what is and isn’t going to be properly innovative and are then fully and properly resourced to achieve a proper outcome. The funding could be pure government (e.g., like an ARC CoE), pure industry (e.g., Google/HP), pure philanthropy (e.g., the Victor Chang Institute), or some mix, but it needs to be at a level where researchers are resourced well and kept continually active — not like they often are now in academia, where they spend more time begging for meagre funds via massive proposals at atrocious success rates than doing anything else.

Employment at the institutes would be at 5 levels. The entry level is interns — masters students from nearby universities seeking experience during their studies, paid by stipend as they are now. The largest ranks would be the junior fellows and technical staff. Junior fellows are what we currently view as Ph.D. students (i.e., freshly completed Masters students), but I would abolish the Ph.D. entirely; it is a historical artefact in this system — one that basically amounts to several years of indentured slave labour and payment of a small ransom for access to jobs in the sector. This would mean paying these people a proper salary right from scratch, which is only fair given their vital role.

Technical staff are as we have now — less focussed on research itself, more focussed on doing the jobs that are essential to research being done efficiently. The second-highest tier would be the senior fellows — the postdocs and junior academics of today. The top tier would be the scientific management — the professors of today’s system. All of these people are paid on a more continuous scale at a value that’s fair to their contribution and career progression — it seems outrageous to me that Ph.D. students get paid 1/7th of what a professor makes; paying them properly also means we can expect more of them in terms of capability, professionalism and output.

The junior fellows are on 5-year non-continuing contracts; everyone else is on a 10-year renewable contract with review at 5 years — no one in the organisation has a permanent position, including the management. Employment levels are set by management to be sustainable such that the organisation is optimally productive, with stipulations on working in teams, minimum resourcing, reallocation of people between projects/teams and management structure. There are also proper targets for workplace diversity and workload management — staff should work hard because they want to and feel they should, not as a prerequisite to survival within the system. This would be enabled by changing the way staff are assessed.

Only junior and senior fellows are assessed based on output and outcomes, but they are judged from a team productivity perspective. Comments like “blah isn’t first or last author enough, therefore they contribute nothing” should never be heard again — teams are never 90% two members and a load of passengers, these sorts of attitudes are absolutely destructive to good collaborative and collegial science. Note that since there is no longer a grant system available to these fellows — funding runs down through the management system much like in industry — career assessment is only really needed internally. This means that performance can be more properly judged and managed, and skills beyond ‘how many first author Nature/Science papers can you get for yourself?’ can be properly valued. It enables people to better survive career breaks, be they for professional reasons or family reasons.

Management are instead assessed by ‘360-degree feedback’, weighted at, say, 70% from junior and senior fellows (anonymous upward assessment) and 30% from peers and above. It focuses entirely on the extent to which a manager enables the teams below them, and the institute as a whole, to be maximally productive. For management, it is much more about creating a legacy at the junior levels and investing in the future than about feathering their own nests or driving their own output. Management are there to inspire, enable and encourage, rather than to slave-drive and claim credit to advance their own metrics.

This model would necessarily mean fewer people in research, but not necessarily reduced employment — there are now two separate systems needing staffing: one devoted to research and one to teaching. In both cases the employees no longer have a major conflict of interest — they either do teaching and do it well, or they do research and do it well. They are not trying to do both, and ultimately doing each less well than they could because there are only so many hours in a day and so much cash to go around. On both sides, you will have fewer burn-out victims destroyed by working insane hours trying to do two jobs really well — choose one, do it properly. In both cases, if at the end of their contract they aren’t doing it well enough, then they go do something else (or move from one system to the other). If anything, there should be encouraged turnover, and perhaps even schemes to cleanly, fairly and properly ‘manage’ people out of the system into other careers, e.g., politics, the public sector, etc., where intelligent people who can reason well are sorely needed.

This model should also mean less time wasted on intense competition for dwindling resources — as Thiel’s “Competition is for losers” argues, there is value in deliberately creating national/state research monopolies resourced to a level where they can properly go after innovative ideas. This is not to say competition is eliminated entirely — it can go on as a contest of ideas inside the institute — but clear decisions on resourcing are then made. This enables innovation directions to be ‘shaken down’ efficiently, with the best ones properly resourced so they result in proper outcomes, not underbaked outcomes due to resource starvation.

Finally, it means that universities are properly competing on their actual core business — teaching the next generation advanced ideas — not on another core business, research, that happens to be entangled in the same institution by history. I can see how a university needed academics to do both teaching and research in the 1700s, 1800s, and even the early-to-mid 1900s, but in the modern era it’s entirely unnecessary. The leading edge of research is generally so far beyond undergraduate studies that it’s no longer essential to have research and teaching in the same place.

In the end, what many academics are really tired of is trying to do two jobs, neither of which they can ever really feel they’re doing to the fullest of their ability. Focus entirely on teaching in a university and you’re a pariah, your ability to rise through the ranks heavily compromised by the crazy conflict where university rankings are fuelled by research output over teaching quality. Try to do research, often with limited time and limited resources, and it’s a nightmare to retain a competitive edge when improperly and inequitably resourced and hit with teaching loads that vary between individuals (and are sometimes used punitively).

If we really want true innovation, we need to do it properly, and perhaps separating these two competing core businesses is the best way… some of you will note that structures like this already exist in many places. Sure. But I often wonder now if it’s how it should be everywhere…