Recently, an article by Gudula Walterskirchen in the Austrian daily Die Presse about evidence-based policy making (EBPM) caught my eye.

EBPM has been kicking around since the 1980s. Essentially it is the idea that policy decisions should be based on comprehensive, meticulously established, objective evidence as opposed to intuition, ideology, or common sense. It has a number of proponents and, at face value, it makes sense – more data, informed decisions, better policies.

Walterskirchen examines how, in an EBPM approach, the Austrian federal government at the beginning of the COVID-19 pandemic commissioned mathematicians to construct projection models for virus spread and subsequent deaths.

Based on the projections, Chancellor Kurz warned that without harsh measures 100,000 Austrians would die. We went into lockdown. When the mathematicians saw how their data was being used, they attempted to explain the complexity of their models and stressed that Kurz’s statement was an oversimplification.
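To give a feel for what such projection models do, here is a minimal, purely illustrative sketch of a compartmental SIR model in Python. It is emphatically not the consortium’s actual model; the contact rate, recovery rate, fatality rate, and population figure are all assumptions chosen for demonstration. The point is how dramatically the projected death toll swings when those assumptions change.

```python
# Purely illustrative toy SIR model; NOT the model the Austrian consortium used.
# All parameters (beta, gamma, fatality_rate, population) are assumed values
# chosen for demonstration only.

def projected_deaths(population, beta, gamma, fatality_rate, days=365, initial_infected=10):
    """Run a simple discrete-time SIR model and return crude projected deaths."""
    s = population - initial_infected  # susceptible
    i = float(initial_infected)        # currently infected
    r = 0.0                            # recovered/removed
    for _ in range(days):
        new_infections = beta * s * i / population
        new_removals = gamma * i
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
    # Crude simplification: deaths as a fixed share of all resolved cases.
    return r * fatality_rate

POP = 8_900_000  # roughly Austria's population

# "No measures": high contact rate (R0 = beta/gamma = 4)
print(f"Unmitigated: ~{projected_deaths(POP, beta=0.4,  gamma=0.1, fatality_rate=0.01):,.0f} deaths")

# "Hard lockdown": contact rate pushed below the epidemic threshold (R0 = 0.9)
print(f"Lockdown:    ~{projected_deaths(POP, beta=0.09, gamma=0.1, fatality_rate=0.01):,.0f} deaths")
```

With these toy numbers, the unmitigated scenario lands in the vicinity of the famous six-figure warning while the lockdown scenario barely registers; whether either figure is realistic depends entirely on assumptions the headline number never mentions, which is precisely the nuance the mathematicians were trying to preserve.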

Politics, however, needs simple messages. As Dr. Paul Cairney argues, EBPM is flawed. Because of an overabundance of information, policy makers can’t and don’t consider all the evidence when adopting policies and resort to shortcuts. What are your thoughts? Can EBPM work? Can informed policy makers actually make better decisions?

Image by Steve Buissinne from Pixabay.

3 thoughts on “Can EBPM work?”

  1. This is an interesting idea! I think on the surface it absolutely makes sense to examine and create policies with as little bias as possible, and it is therefore very important that we have clear facts and data to inform this. However, the entire notion of policy-making relies on the existence of some collectively agreed-upon moral framework. Accurate data is always important, but the further implications of relying solely on objective data are quite dangerous. A government needs to have some “common sense” goals to begin with (in this case, minimizing deaths as a result of the virus), and the logic behind EBPM could seemingly be used to justify “immoral” policies just as well as “successful” ones. There’s certainly a delicate balance to be found, especially considering that, as you said, policy makers are human and can never make such robotic decisions as those posited by EBPM.

    1. Thanks for your comment. It is interesting that you equate EBPM with robotic decisions. It is not a connection I would have drawn. When I think about EBPM, I envision an unattainable rationalist utopia – but one clearly composed of human (Homo sapiens of all genders without excessive body modifications – pacemakers, prostheses, etc. yes / chips in the brain, no – what it means to be human is clearly the topic for another blog) policy makers.

      When considering such a body of policy makers, I question whether informed rational decisions are better/have greater efficacy/are more moral/… than intuitive decisions. This questioning often gets me into heated discussions with my fellow philosophy students. Adding “robotic” or AI decisions into the mix opens up a whole new perspective. Here, I would argue that AI in some situations would clearly make better decisions – for example, controlling merging traffic.

      1. That’s definitely true! I agree that in many cases, those “robotic” policy makers would do a good job at enforcing such things, deciding the best course of action during a pandemic, etc. I think it’s also important that humans and human “intuition” are involved in designating the moral basis around which those policies revolve. We need basic moral guidelines like “keep as many people alive as possible” or “don’t infringe on individual rights” (and defining what those rights are for people, etc.) in order for “robotically” decided policies to accomplish what we want. What I mostly worry about with EBPM is where it draws the line between those sets of policies. I’m sure that if put into practice it could be helpful/efficient in situations like a pandemic, I just think we ought to be cautious about how low-level those policies can delve. Complete neglect of human “intuition” could lead to the very problems people tend to be afraid of at the mention of AI – that such robotically made decisions would prioritize some lives over others, would turn on human morals as a whole, etc. It’s certainly a far-fetched extension of what EBPM seeks to do, and I absolutely agree that in some situations AI would do a much better job at enforcing policy. This is really interesting to think about!
