Friday 2 December 2011

Assemblages - a framework for social networks

As I described last month, we are collectively engaged in networks that encompass social systems and populations (macro level) as well as individual action and uniquely occurring interactions (micro level). The problem of scale is how to hold the macro and micro together without reducing the macro to a series of micro epiphenomena or erasing the micro by reducing it to the functions of social forces. As Mark Granovetter observed:
“A fundamental weakness of current sociological theory is that it does not relate micro-level interactions to macro-level patterns in any convincing way” (Granovetter 1973: 1360).
Granovetter’s criticism still applies to much of contemporary social and organisational theory. Many of the “solutions” to this problem simply defer the reductionism from the macro to the meso level, as with Anthony Giddens’ theory of structuration (Giddens 1986), the concept of (transformative) praxis (Bhaskar 1997), the notion of the routine within the multi-level perspective (Nelson and Winter 1977), or through different forms of conflation based on act aggregation or agent orchestration (see Archer 1995: 93-134). Network theory might offer a promising alternative, and yet network theory itself possesses the same weakness. The problem of addressing the limitations of existing network theories can therefore be coupled with the requirement to develop a theoretical solution to the problem of scale. It is for this reason that I have built on the concept of the assemblage as a theoretical framework for network theory. While many of the features of assemblages are found in existing network descriptions, the concept of the assemblage, unlike existing network theories, was not developed from fragmented theories with different supporting ontological assumptions. It was devised with a clear purpose and directed towards a specific problematic within a unified philosophical scheme, though one which is complex and requires a series of steps to be fully conceptualised (Deleuze and Guattari 1988: 323-337).

Sunday 20 November 2011

Social Networks and their Limitations

My research interests, as applied to assessing green innovation, technology, clusters, strategic alliances or policy development, are based upon understanding the social networks that afford the relationships from which new ideas and innovations emerge. This month I’ll introduce a few academic considerations about social networks, while next month I’ll try to explain how I’m developing a corrective to the limitations that social network thinking must overcome if it is to offer an effective explanation of the interdependencies it describes.

Attempts to describe and explain the social interactions that support organisations and other social phenomena make use of a variety of models and methods. There are, though, a limited number of paradigms that dominate the literature, and among these, emerging as the market leader, is that of the network. Stephen Borgatti and Pacey Foster illustrate the exponential growth of network-based research outputs with bibliometric data (Borgatti and Foster 2003: 992), arguing that the network paradigm forms part of a more general move “away from individualist, essentialist and atomistic explanations toward more relational, contextual and systemic understandings” (Borgatti and Foster 2003: 991).  In this month’s blog I will briefly examine some of the key features of network theory.  The network paradigm does not represent a unified approach to research; network models are themselves diverse but share characteristics and assumptions.  The use of such models in addressing the issue of innovation (the theme of this post) seems a sensible choice, as networks are able to capture a sense of the interdependencies of organisations and the channels of exchange that enable the relationships necessary for innovation to develop and be maintained (Freeman 1991).  Equally, network descriptions can be applied to a variety of innovation-related phenomena.  Examples include Powell, Koput and Smith-Doerr (1996), who use network patterns to describe the growth in corporate partnerships and external collaboration and the purpose such relationships serve, while Bengt-Åke Lundvall, with a very different approach to organisational adaptation, uses network descriptions to exemplify the process of knowledge transfer and learning between different firms (see Lundvall 1992).

Monday 24 October 2011

Come off it, REF!

The REF could have an influence on how academics engage with non-academics
This post isn't about what is happening on various football fields but on different academic fields. The REF in question (Research Excellence Framework) is in many ways an improvement over the previous Research Assessment Exercise (RAE), with 20% of the assessment based upon the impact of research on non-academics. There is, however, a catch: the research not only needs to have a justifiable impact but must also be published in academic journals with an impact factor. Departments noted for their excellent work with the general public, policy makers, business people and practitioners will not be credited by the REF unless the research they develop to help such stakeholders is published in journals used almost exclusively by (a small number of) academics. Why is this a problem? My view is that a field or discipline cannot develop in an inclusive way if it follows the Kuhnian approach to research termed “normal science”, based on past scientific achievements that the appropriate academic community acknowledges as a foundation for its practice. Kuhn describes these achievements, or “paradigms”, as sufficiently unprecedented to attract a group of adherents away from competing modes of academic research, but, at the same time, sufficiently open-ended to leave various problems for the community of research practitioners to address. Paradigms, in this way, help academic communities to demarcate their discipline. They do so, Kuhn argues, by creating avenues of inquiry, helping to formulate research questions, directing the selection of methods appropriate to these questions, defining areas of relevance, structuring the fact-gathering process and identifying acceptable technologies appropriate for research. A paradigm also draws in individuals to act as advocates. These advocates and followers are then transformed into a research community, a profession or a discipline as the paradigm becomes accepted and gains credibility.
This occurs, Kuhn argues, through the formation of journals, societies or specialist groups, which develop the discipline through articles that are directed to their colleagues who accept the paradigm, rather than needing to justify the concepts, questions, and methods from first principles. This professionalism is supported by the community using its expertise to claim, both for themselves and their paradigm, a place in the academic establishment.

Thursday 1 September 2011

The cultural branding of Climate Change campaigns?

Brands are not just efficient communication tools
While the business application of new modes of branding has been explored in detail, the cultural and communication implications are perhaps even more significant, yet remain an under-researched area.  Branding is no longer the simple art of product differentiation using a pleasant logo or clever product name.  Since the development of the mass media, branding has become a key driver not only in the globalisation of product development and promotion strategy, but also in the communication of ideas, cultural icons and social movements.  With the integration of ICT, low-cost multimedia and extensive knowledge networks, the way in which branding has traditionally been used to promote products has been revolutionised, and the strategies that have supported the added value that we as consumers attribute to products and services can now be extended to a range of cultural activities and identities as though they were commodities.  Likewise, the sophistication of consumers has meant that successful branding strategies depend upon the ability of a brand to resonate with complex, and shifting, cultural values.  The merging of these two tendencies has meant that cultural branding, and the branding of culture, are now interdependent forces in the social mediascape in which contemporary life is represented and performed.

Monday 8 August 2011

What is E3MG?

E3MG sounds complicated - it is!
One of the key areas of research that 4CMR is involved with is E3 modelling.  Over the past few years one of the central models, E3MG (Energy-Environment-Economy (E3) Model for the Globe), has been improved as part of a concerted effort by the core research group.  In the near future the improved model will be used to assess policy scenarios related to carbon emissions.  Many of the people who follow this blog will have a vague idea of what such a model is, but this month I will be introducing the model in more detail.  If you are interested in climate change debates but are turned off by the technical details of econometric modelling, then perhaps this month’s blog article is not for you.

OK, so what is E3MG? The model is one of a suite of E3 models maintained and improved by Cambridge Centre for Climate Change Mitigation Research (4CMR) and Cambridge Econometrics under the guidance of Terry Barker.  Each of the models within this suite has a specific territorial focus; E3MG extends coverage across the globe. 

Wednesday 6 July 2011

Emissions Reduction Strategies: How to Choose the Right Policy Mix

Policy makers must ensure that the policy mix does not promote conflicting incentives
The types of policies used to reduce carbon emissions will differ over time, as some issues emerge as more pressing problems requiring new solutions.  Policies can be national or regional, or require international cooperation: bilateral, multinational or at the global scale.  Climate change mitigation is clearly a global problem, but one which includes national considerations such as energy, transport, land use and manufacturing policy, and as such both domestic and international policies will be needed to provide incentives for mitigation. 

There are many methods and procedures that form a part of policy but are not legally enforced.  Awareness campaigns, education and information dissemination, social movements, awards, and climate action naming and shaming might each enrol people or organisations to take action to reduce emissions.  Other potentially effective ways of reducing carbon emissions without recourse to legally binding policy include voluntary commitments, schemes or agreements, contracts, and sanctions for failure to comply with agreed targets. 

While these strategies might play a role in the policy mix, I’ll consider just three types of high-impact policy, those aimed in particular at reducing emissions from energy use: imposed standards, market instruments and financial instruments.

Friday 3 June 2011

Climate science models and the scientific method: experimentation versus deduction?

Is this science and is it valid: What would Thomas Hobbes think?
In Hobbes’s view, Boyle’s experimental solution to the problem of order was not possible; it was not effective; and it was dangerous. (Shapin and Schaffer 1985: 80-81).
In discussing the methods of climate science, the use of models and the process of simulation are often mentioned as either a key strength or key weakness of the science.  While climate models receive a great deal of attention (and funding), they are generally a very marginal part of the science – much more time is spent examining actual data, identifying relevant data sources, developing statistical methods and making sure that observations and calculations are contextualised in terms of other data sets.  The fact that some statistical methods assessing correlations in actual data are also called modelling is a source of confusion.

By modelling, I will refer here only to simulation using models, a type of experimentation.  I will contrast modelling with the statistical analysis of actual data to determine specific correlations that does not depend on extrapolation or simulation.  While examples of either approach can be contested, the issue I am concerned with here is the validity of simulation and experimentation using models in climate science.  This issue is fundamental if we are to understand the criteria such modelling must meet if it is to follow the scientific method. 

The scientific method seems to be one of the more straightforward issues in climate science - we all know what the scientific method is, right? We agree with the Oxford English Dictionary that it is “a method of procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses”, I assume.  But let's go back to the 17th century for a moment to understand the emergence of the concept.  The status of experimentation was the source of an important dispute between Robert Boyle and Thomas Hobbes during the 1660s and 1670s.  The disagreement between Boyle and Hobbes is analysed in great detail by Steven Shapin and Simon Schaffer in their book Leviathan and the Air-Pump (Shapin and Schaffer 1985).  The book examines, among its many themes, the boundaries and methods of science in relation to questions of political philosophy.  This is a good place to start because there were two competing approaches to “the” scientific method, best expressed in terms of the relationship between the scientist Robert Boyle and the political philosopher Thomas Hobbes, or rather the science and political thought of Boyle and the political thought and science of Hobbes.  Hobbes and Boyle are key figures in the Enlightenment tradition: their approach to method was that of the mechanistic philosophy, and their epistemology developed along similar lines within the rationalist tradition, but where they differ is in the status of experimentation itself.

Tuesday 3 May 2011

Holocaust Denial and Climate Change Denial - part 1

Climate Change? - It's all down to sunspots, right?
One thing should be made very clear: any attempt to link the views of those who oppose the conventional climate science to Nazism in any form is misguided and unfair. Godwin's law of Nazi analogies (sometimes stated as follows: as an online discussion grows longer, the probability of a comparison involving the Nazis or Hitler approaches 1) implies that such comparisons are overused, and trying to create guilt by association through the word "denial" is, in my view, an example of Godwin's law and its corollary: that we should avoid glib association of a point of view with the Nazis. Holocaust denial is often part of a strategy to disassociate the extreme right from the systematic murder of millions of people in order to make an extreme right-wing or anti-Semitic agenda more attractive. Holocaust denial historians ignore evidence or rely on fabricated evidence, and they imply the existence of a huge conspiracy on the part of eyewitnesses and conventional historians. Those who voice opposition to the conventional views of climate science do so for many motives, as I discuss elsewhere, but however superficially similar the methods of some opponents of the conventional climate science might be to those of Holocaust deniers, they could never share the sneer at suffering that exemplifies the nasty agenda underlying the deniers' falsification of history.  In short, calling the opponents of the conventional view of climate science “deniers” as a smear strategy is wrong.

The reality, though, is that names matter, which is why name calling matters: coining the right name for your opponents, or for the object of your opposition, is crucial. In the UK the expression "Franken food" still resonates with the general public when considering genetically modified foods, and in the abortion debate both sides control their image by stating they are pro something. Controlling your own name is an important part of controlling your image, which is why enhanced interrogation techniques are sanctioned, not torture; why politicians only ever misspeak, fudge or are economical with the truth; why the Climate Research Unit email controversy is framed as "Climategate"; and why Holocaust deniers prefer to be called revisionists. Pejorative names have also been coined and applied to those who believe that anthropogenic climate change is a reality, including the climate scientists whose research illustrates a clear link: alarmists, warmists, true believers or team hockeystick. The implication is clear: add a context to the opponents' view (alarm; warming rather than climate change; dogmatism; the suggestion that the hockeystick graph is based on bad science) and extra (negative) information is coupled to the label. Instead, those who hold this view prefer the expression "the consensus view", or express their beliefs in terms of the scientific consensus in general, to indicate that the basic science is settled. The implication of this is also clear: the argument is over, the science says anthropogenic climate change is happening, reduce emissions or face the consequences.

Friday 8 April 2011

Engaging Stakeholders in Climate Change

The relationship between universities and society has changed in recent years.  In most developed economies the university has become a key strand in the triple helix, complementing the added value to the economy provided by business and government elements.  Universities provide training and expertise as well as an infrastructure for high quality research, and thus such research is no longer to be thought of as pure, neutral and uncorrupted by the inconvenience and practicalities of life, if indeed it ever had such higher aspirations. 

In the UK, the Higher Education Funding Council for England (HEFCE) has introduced an impact element for evaluating university research in the next round of assessment in 2014, called the Research Excellence Framework (REF).  Universities will need to show that their research is not just addressing the needs of academics (it needs to do that too!) nor just available to the general public or passively disseminated to “stakeholders” but that the research has informed and impacted business, policy makers and other key decision makers in a decisive way. 

Climate change research has the potential to change the way different organisations conduct themselves, make policy and change behaviour at every level, from the individual to the largest corporations and governments in the world.  This raises a small number of important questions:
A) What are climate-change researchers/research groups doing to engage with (for lack of a better word) stakeholders?
B) What should climate change research groups be doing to engage with stakeholders?
C) What incentives can be used to make the answers to questions A and B the same?

Let’s begin with question B.  Those opposed to doing anything to mitigate the possible effects of climate change will say that researchers should stay out of policy making and avoid producing propaganda for activists.  Those in favour of addressing the problems they perceive to be the key threat of our lifetime generally favour climate researchers, if not becoming advocates for a programme and activists for its implementation, then at least working closely with such groups.  The answer to this question is thus political, and will depend as much on ideology as on the responsibility such knowledge implies.

Thursday 17 March 2011

Climate Change for Football (or Soccer) Fans

 Paul Haynes in conversation with James Atkins
Climate change policy is certainly an issue that receives attention – the search term generates more than half a million hits; however, a similar search for “Manchester United” generates nearly 100 million, and they are only one of the 92 professional football clubs in England.  This statistic seems to support my prejudice that football supporters (like myself – Wolverhampton Wanderers in case you wondered) are involved with their club in ways that very few issues can match.  If climate change apathy is the problem (and the blog title gives a clue about my view), the question is whether we could learn anything from the enthusiasm and engagement of football fans.  It turns out that someone else (a Manchester United fan no less) has already thought about this and written a book about football fans, for football fans, that also examines climate change policy issues.  The novel, which I’m reading at the moment, is a well-observed and comical story about an individual with a passion for football, another with a passion for addressing climate change, and how they learn from each other.  I caught up with the author, James Atkins, to ask him about the motives for writing his book:
Climate Change for Football Fans is an attempt to talk about climate change policy, a dull subject, in a more palatable way: a chocolate digestive in a world of Lincolns.  In the last ten years working in emissions trading I have thought a lot about environmental problems and climate change and what governments and individuals can do about them. I have also written about this in articles and in my blog The Bustard.  Then a friend suggested that I compile the blog entries into a book in order to expand the readership.  Not wanting to repeat what had already been written I started reading around the topic.

I found that books and reports on climate change policy are an uphill struggle.  Few books on climate change are readable or enjoyable, despite it being an extremely important topic. So I tried to find a way of making the book more entertaining.  This was partly through putting dialogue and humour in it, and partly through introducing the loose parallel of another subject.

The book is a series of conversations between Joe, a Burnley lad who is football mad, and Professor Igor who's obsessed with climate change.

Friday 4 March 2011

Petroleum: A Crude Estimate of the Value of Oil


Last month I wrote about the need to act on energy consumption far beyond the recommendations of experts and advisors, in order to mitigate climate change effects considerably worse than those anticipated by the advisors that policy makers rely on.  One issue that represents the difficulty, opportunity and necessity of changing the way we consume energy is our use of petroleum, or put another way, our dependence on crude oil.

What is left to say about petroleum that isn’t already obvious? It is a valuable resource that can be converted into many vital commodities very cheaply, not just pharmaceuticals, plastics, fertilisers, solvents and similar products, but thousands of different components of vital technologies and products.  It provides most of the world’s fuel for transport and contributes considerably to the heating and power capacity of our energy supply.  Burning it produces lots of energy, but also CO2 and other undesirable emissions.  It is being used at a rapid rate – the best guess is about 84 million barrels a day.  It is a limited – and non-renewable – resource, with total oil reserves amounting to around 1.2 trillion barrels; much of these reserves (around sixty percent) lie in the Middle East, with other reserves concentrated in the former Soviet Union, Venezuela and Africa, leaving only around 20% of oil reserves in countries that the US Senate would consider unambiguously stable. 
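As a rough back-of-the-envelope check (using the approximate figures quoted above, not authoritative data), the reserves-to-consumption ratio can be computed directly:

```python
# Rough reserves-to-consumption estimate from the figures quoted above.
# Both numbers are approximations; the result is illustrative only and
# ignores demand growth, new discoveries and declining extraction rates.

BARRELS_PER_DAY = 84e6    # global consumption: ~84 million barrels/day
TOTAL_RESERVES = 1.2e12   # total reserves: ~1.2 trillion barrels

annual_consumption = BARRELS_PER_DAY * 365
years_remaining = TOTAL_RESERVES / annual_consumption

print(f"Annual consumption: {annual_consumption:.2e} barrels")
print(f"Reserves last roughly {years_remaining:.0f} years at current rates")
```

On these assumptions the ratio comes out at around four decades, which is exactly why the uncertainty over extraction costs and future demand discussed below matters so much for prices.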

Another consideration is that while global demand remains high, easily extractable reserves are declining, and declining rapidly.  High demand, coupled with limited supply, uncertainty over production, a cartel controlling much of the available supply, uncertainty over the existence and viability of extraction, uncertainty over the costs, risks and efficiency of extraction, the growth of a number of emerging economies, institutionalised speculation and changes in the terms of trade between different consumer nations, coupled with greater mobility and more intense global trade, has meant that prices are at present both high and volatile.  In short, the price of a vital commodity on which trade and the economy depend is very difficult to forecast.

In addition to price volatility and limited supply of oil, there are other issues we need to consider in assessing if our relationship with oil is a healthy one. For example, meeting our various climate change agreement targets will require using considerably less fossil fuel; ensuring that the limited landfill space isn’t crammed full of plastics will mean reducing our dependence on plastic packaging; reducing the use of chemicals in the atmosphere is also an opportunity to use fewer petroleum based products.  True, our dependence on oil is deep, but motives for reducing this dependence are strong and multiple. 

This brings me to the subject of climate change, or more specifically to the relationship between climate change and oil.   

Monday 7 February 2011

Tipping Points and the top 1% OR “if you hope for 1° C don’t act for 4°C”

A week or so ago Prof Kevin Anderson, from Tyndall Manchester, gave a talk to 4CMR entitled Climate Change: Going Beyond Dangerous.  The talk assessed the numbers and the models used to determine policy on decarbonising the economy.  The consensus view, he argued, is as follows: an increased global mean temperature of 2°C (on pre-industrial levels) is the point at which climate change is considered a major problem, and the lowest stabilisation point that we could manage through practical and expedient policies. 

Assessing the latest research findings and analysing the data in great detail, he reached a twofold conclusion:

2°C is likely to have dangerous (or extremely dangerous) consequences, and hence 1°C should be the upper limit if we are to avoid dangerous climate change (i.e. the consensus view is too optimistic: the target should be lower)

In practice, 2°C is becoming less likely, with 4°C a more likely stabilisation point even if we implement the more ambitious carbon emissions reduction strategies available to us, AND this assumes:

  • IPCC’s link between cumulative emissions and temperature is broadly correct
  • Non-Annex 1 nations peak emissions by 2025
  • There are rapid reductions in deforestation emissions
  • Food emissions halve from today’s values by 2050
  • No tipping points occur
His conclusion is that 2°C is nearly impossible and that 4°C is likely by 2070 and depending on the effects of various tipping points, there is a chance that stabilisation will be even higher (i.e. the consensus view is too optimistic: the target reduction is unobtainable).

Anderson's conclusion is thus doubly pessimistic, but is there any room for optimism? Well, failure to reach a 1°C target assumes that policies and agreements are directed towards the global population, divided into countries and regions, which leads to the more profound question: if we are to meet a 1°C stabilisation point, how many people need to make the necessary changes?

Monday 17 January 2011

Being Certain about Climate Change Uncertainty

By Martin Sewell, Senior Research Associate, 4CMR

There are aspects of climate change about which we are almost certain (the physical chemistry), and areas in which uncertainty is rife (e.g. the effect of clouds, the ocean, the response of biological processes, climate change mitigation). My view is that we must explicitly engage with uncertainty, and the best way to do so is using a probability distribution, and the wider the distribution, the greater the uncertainty.
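This idea can be sketched numerically (with purely illustrative numbers, not climate data): the wider the distribution, the wider the interval of values we must take seriously.

```python
from statistics import NormalDist

# Two hypothetical estimates of the same quantity (say, warming in degrees C):
# identical central value, but different spreads, i.e. different uncertainty.
confident = NormalDist(mu=3.0, sigma=0.5)   # narrow: low uncertainty
uncertain = NormalDist(mu=3.0, sigma=1.5)   # wide: high uncertainty

def interval_90(dist):
    """Return the central 90% interval of a normal distribution."""
    return dist.inv_cdf(0.05), dist.inv_cdf(0.95)

for name, dist in [("confident", confident), ("uncertain", uncertain)]:
    lo, hi = interval_90(dist)
    print(f"{name}: 90% of the probability mass lies in [{lo:.2f}, {hi:.2f}]")
```

Both distributions make the same central estimate, but the second obliges us to plan for a much wider range of outcomes - which is precisely the information a single point estimate hides.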

The 18th century philosopher (and economist) David Hume pointed out that ‘even after the observation of the frequent or constant conjunction of objects, we have no reason to draw any inference concerning any object beyond those of which we have had experience’. In other words, one can never generalize beyond one’s data without making subjective assumptions, so science always involves a degree of uncertainty.

What is the best way of communicating uncertainty? In March 1951, the CIA secretly warned US officials that a Soviet attack on Yugoslavia should be considered a ‘serious possibility’. When Sherman Kent, a CIA intelligence analyst, asked his colleagues what probability they attributed to the likelihood of an attack on Yugoslavia in 1951, he was shocked to hear responses ranging from 20% to 80%. In 1964 Kent wrote the seminal Words of Estimative Probability, in which he attempted to quantify qualitative judgements and eliminate what he termed ‘weasel’ words. For example, he recommended that ‘probable’ meant 63–87%, and ‘almost certain’ 87–99%. Since then, the BBC and the IPCC have also given serious consideration to how to communicate uncertainty. My view is that we should use probability.
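Kent's scheme is easy to sketch in code. The sketch below uses only the two bands quoted above ('probable' and 'almost certain'); the lookup function itself is a minimal illustration, not Kent's full table:

```python
# Sherman Kent's idea: replace "weasel words" with explicit probability
# ranges.  Only the two bands below are taken from the text above; the
# lookup logic is an illustrative sketch of how such a mapping would work.

ESTIMATIVE_WORDS = {
    "probable":       (0.63, 0.87),
    "almost certain": (0.87, 0.99),
}

def describe(p):
    """Map a probability to Kent-style estimative language, if a band fits."""
    for word, (lo, hi) in ESTIMATIVE_WORDS.items():
        if lo <= p < hi:
            return word
    return "no agreed term"

print(describe(0.75))   # a 75% judgement falls in the "probable" band
print(describe(0.95))   # a 95% judgement is "almost certain"
```

Run in reverse, such a table would also have exposed the 1951 problem immediately: colleagues attaching anything from 20% to 80% to the same phrase are plainly not using an agreed vocabulary.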