Causation in a Complex System
by Joseph Rowlands

What is the right way to analyze a complex system? A complex system can have many variables with complex interactions, leading to complex effects that have many different attributes. Some examples of complex systems are economies, biological systems, climate, and child development. Each has an enormous number of factors that can change the results. They also have complex results with many unrelated attributes. So how can they be studied and understood?

One approach is a top-down method. This is the approach used in macroeconomics. It attempts to analyze the relationship between various "inputs" and "outputs" of an economic system, often through a use of statistics and mathematics. So for example, an economist may try to determine the relationship between labor and total production. He can use statistics to determine how total production varies with an increase in labor, while trying to account for any other major input changes.

There are a lot of problems with this approach.

The first problem is that the approach will often need to combine many inputs into a large aggregate input, such as treating labor as a single quantifiable value. It's perfectly fine to discuss labor in economics as a kind of category or concept. But to treat it as a variable in a mathematical equation, you have to have some method of comparing and measuring it. The obvious way to measure labor is by the number of people. The problem here is that people are not equal in terms of economic output. Some work harder, some are more independent, some are more educated, some are more skilled, etc. Treating labor as if everyone were exactly the same is not realistic. Whatever relationship you try to infer from that assumption must be wrong.

Is there a better way to measure labor so it isn't inaccurate? I don't think so. It's important to note that how labor is measured depends on the relationship you expect to find. If you're trying to measure total manpower so you can determine the average salary rate, a simple headcount is fine. But if you're trying to measure total economic output, there won't be a good answer.

People have many different qualities that can affect economic output. You can't just pick one and ignore the others. It wouldn't make sense, for instance, to just use education. While education level might be an important factor, there are many other factors. And even education is an aggregate. There are many kinds of education.

What about taking many attributes and giving each a weighted value? This degree is worth this much, those years of experience are worth that much, this degree of hard work is worth so much, etc. But these attributes can't be treated as aggregates either. A group of people that is half educated but not hardworking and half hardworking but not educated does not produce the same results as a group that is half educated and hardworking and half uneducated and not hardworking.
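
To make the composition point concrete, here is a minimal Python sketch. It assumes, purely for illustration, a hypothetical production function in which education and hard work reinforce each other; the specific numbers are invented, not drawn from any real data.

```python
# Hypothetical illustration: output per worker depends on BOTH education and
# hard work together, so group composition matters even when the average
# level of each attribute is identical across groups.

def output(educated: bool, hardworking: bool) -> float:
    # Invented numbers for illustration only.
    if educated and hardworking:
        return 4.0      # the attributes reinforce each other
    if educated or hardworking:
        return 1.5
    return 1.0

# Group A: half educated-but-not-hardworking, half hardworking-but-not-educated.
group_a = [(True, False)] * 50 + [(False, True)] * 50

# Group B: half educated-and-hardworking, half neither.
group_b = [(True, True)] * 50 + [(False, False)] * 50

total_a = sum(output(e, h) for e, h in group_a)
total_b = sum(output(e, h) for e, h in group_b)

print(total_a)  # 150.0
print(total_b)  # 250.0 -- same attribute averages, very different output
```

Both groups have fifty educated workers and fifty hardworking workers, so any weighted-average scheme scores them identically, yet the outputs diverge.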

And trying to assign weights would be impossible. You could study an economy at a particular time and try to measure each person's attributes and contribution, but any change in demand or production could turn your high-producing people into low producers. VCR repairmen could go from providing a valuable service to being unemployable in a short span of time. The only point in measuring these statistics is the assumption that their effect on total economic output stays constant.

For those determined to attempt this top-down methodology, these obvious problems are mere distractions. Who cares if people aren't exactly the same as one another? If we pretend that they are, isn't that a close enough approximation? It may even be that while there are differences between individuals, large groups of people always average out. Sure, 10% of a population may be significantly different from the rest, but if those ratios hold, we can still measure output in terms of total labor force. So maybe we can just assume the ratios will hold.

The problem with these kinds of assumptions is that they can work fine as long as nothing significant changes. That's one reason why economic models never predict major downturns or other abrupt changes. To justify aggregating data, they have to make assumptions that things will stay the same. And that only works as long as things stay the same.

This creates an interesting problem. If some relationship seems to hold in many cases, but fails in a few, what causes the failure? Is it another factor outside of labor? Or is it a change in the composition of labor? If one population produces more than expected, how is that dealt with? One choice is to assume it must be some other, outside factor. This prevents the initial relationship from being falsified.

A different approach is to recognize that labor varies, and add a new term like Efficiency of Labor that can be multiplied by the labor quantity. But this has its own problem, since the value of that efficiency is determined after the fact. It manipulates the data to fit the conclusions. This also prevents the relationship from being falsified.
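
A small sketch of why a fitted Efficiency of Labor term can never be falsified. The model form and numbers below are hypothetical; the point is only that when the efficiency term is solved for after the fact, the model reproduces any observation by construction.

```python
# Hypothetical model: output = efficiency * labor.
# If 'efficiency' is back-calculated from observed output, the model can never
# be wrong -- whatever happens, the fitted term absorbs the discrepancy.

observations = [
    # (labor force size, observed output) -- invented numbers
    (100, 500.0),
    (100, 350.0),   # same labor, very different output
    (120, 900.0),
]

for labor, observed_output in observations:
    efficiency = observed_output / labor   # solved after the fact
    predicted = efficiency * labor         # matches the observation trivially
    print(f"labor={labor}, fitted efficiency={efficiency:.2f}, "
          f"predicted={predicted:.1f}, observed={observed_output:.1f}")
```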

The problem of picking good inputs is only one of the major issues. Another is determining the outputs. How do you measure total economic output? Do you measure the total amount of "stuff" that's produced? But a communist regime that makes only size 7 shoes would be as "productive" as a capitalist system that builds shoes for everyone.

Perhaps output can be measured somehow in terms of the value produced. Maybe by measuring the total dollar value of everything made in a year. But there are a number of obstacles. First, money is not an independent measure of value. The value of money goes up and down based on factors like what it can buy and how much money is circulating.

Even if the value of money were constant, prices are determined by marginal consumers and suppliers. It may be that the non-marginal buyers are thrilled to get a product at that price, and would have been willing to pay more. So prices don't actually measure the total value. There may be substantial differences hidden from sight because prices only reflect the marginal economic players. A customer may switch from spending $10 on Product A to $10 on Product B, but far from being an equivalent situation, this may be a massive economic gain for that customer. If a new generation of computers costs the same as the last generation but performs 50% better, this is not an economic tie.
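
A worked sketch of the point about marginal pricing: the dollar total recorded at the market price says nothing about how much the non-marginal buyers valued the good. All willingness-to-pay figures below are invented for illustration.

```python
# Hypothetical market: each buyer has a maximum willingness to pay, but the
# market price settles where the marginal buyer is barely willing to buy.
willingness_to_pay = [25.0, 18.0, 14.0, 11.0, 10.0]  # invented values
market_price = 10.0                                  # set by the marginal buyer

measured_output = market_price * len(willingness_to_pay)  # what price data records
total_value = sum(willingness_to_pay)                     # value to the buyers
hidden_surplus = total_value - measured_output            # invisible to prices

print(measured_output)  # 50.0
print(total_value)      # 78.0
print(hidden_surplus)   # 28.0 -- gains that price-based output measures miss
```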

Again, if you are determined to use the top-down approach, you'll find some way of measuring the economic output even if it is inaccurate. You can measure with money, and try to account for differences in the value of money by picking some commodities as a comparison point. The fact that those commodities don't accurately reflect the total economy can be brushed off as close enough. The fact that money prices only reflect marginal buyers and sellers can be ignored. You might reply that while it isn't perfectly accurate, it is a good approximation.

This raises another problem, though. Saying that something is close enough or a good approximation implies some degree of accuracy. In order to know whether something is close enough or a good approximation, you'd need to be able to accurately measure the real thing and compare your approximation to it. But if you are using the approximation because there's no actual way to measure the total economic output, how can you say it's close enough? Can you tell how far off you are? Can you tell how much variability there is? No. So what does it mean? It means that this methodology requires you to choose something, and this is the best we can think of.

A third problem stems from the fact that the relationships measured this way are not causal explanations. They are statistical correlations. This has the usual problem that correlation does not imply causation, but there is another problem.

Many of these relationships hint at a causal relationship, but don't state it in any direct way. It may be reasonable to suspect that 'labor' has an important impact on total production. The hint of a causal connection there is the obvious fact that it is people that produce, and so it may be speculated that more people can produce more. But if you showed through statistics that labor is responsible for 30% of the economic output, for instance, you aren't providing a causal explanation. You are still referring to the vague hint of causation that labor is necessary, but you aren't showing that this particular relationship you have found is causal. There's no attempt to connect the causal principle, that labor is needed for production, to this specific relationship.
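
A minimal sketch of why a statistical coefficient is not a causal explanation. In this invented setup, an unmeasured factor drives both "labor" and "output"; a simple regression still reports a strong labor coefficient even though labor contributes nothing to output in this toy world.

```python
import random

random.seed(0)

# Invented data-generating process: a hidden factor (say, capital investment)
# drives BOTH the size of the labor force and total output. Labor itself
# contributes nothing to output in this construction.
hidden = [random.uniform(1, 10) for _ in range(1000)]
labor = [h * 100 + random.gauss(0, 20) for h in hidden]
output = [h * 500 + random.gauss(0, 50) for h in hidden]

# Ordinary least-squares slope of output on labor.
n = len(labor)
mean_l = sum(labor) / n
mean_o = sum(output) / n
slope = (sum((l - mean_l) * (o - mean_o) for l, o in zip(labor, output))
         / sum((l - mean_l) ** 2 for l in labor))

print(f"estimated 'effect' of labor on output: {slope:.2f}")
# Prints a strong positive coefficient (about 5) even though labor plays no
# causal role here -- the correlation comes entirely from the hidden factor.
```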

These are significant problems. The inputs are inaccurate and intentionally vague. The outputs are inaccurate. And the relationships aren't necessarily causal.

This methodology uses a model-based approach to understanding a complex system. It starts with some assumptions about major inputs and major outputs, and starts gathering statistics. The model is known to be flawed and inaccurate, but with the potential of some predictive ability. As statistics are gathered, constants are assigned to the variables. But as more data is found, those constants may not hold. So the constants become equations. New factors are added. Feedback loops are included. They may find that the productivity of capital changes based on the change in labor, adding derivatives into the equations. The idea is to start with an inaccurate model, and keep modifying it with new data, making it more accurate.

There is an epistemological view associated with this model. In that view, we don't really know anything about reality. Instead, we create a model of what the world does in our heads, and then slowly manipulate the model to more accurately reflect the outcomes. It is then argued that we can't really know anything about reality, because our knowledge is always an inaccurate model that is just waiting for the next correction. The model works fine until the unexpected happens.

In contrast to that epistemological view, there is a view that says we actually do understand things about reality. We actually do understand and can identify causal relationships and the nature of things. We obviously don't know everything, but new knowledge does not simply invalidate previous knowledge. The previously known principles may still be true, and the discovery of a new principle just adds to our knowledge.

This second epistemological view also has a parallel methodological approach to complex systems. Instead of starting with an inaccurate model and refining down, you can start with less complex behaviors and build up.

In economics, this is the method of microeconomics. It starts with simple events, like a person confronted with a choice between two products where one is preferred over the other. It grounds each conclusion in terms of causality. And it builds up from there. If one person acts this way, what happens when a group of people, with potentially different preferences, all compete for the same product?
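
A tiny sketch of this bottom-up style of analysis. Assume, purely for illustration, a fixed supply of one good and a handful of buyers with different reservation prices; the market-clearing price emerges from the individual choices rather than from an aggregate equation.

```python
# Bottom-up sketch: individual buyers, each with their own reservation price,
# compete for a fixed supply. The clearing price falls out of individual
# choices rather than an aggregate relationship. All numbers are invented.

reservation_prices = [30.0, 22.0, 15.0, 12.0, 8.0]  # one per buyer
supply = 3                                           # units available

# Raise the price until the number of willing buyers matches the supply.
price = 0.0
while sum(1 for r in reservation_prices if r >= price) > supply:
    price += 0.5

buyers_served = [r for r in reservation_prices if r >= price]
print(f"clearing price ~ {price}, buyers served: {buyers_served}")
# With a scarcer supply, the same individual preferences imply a higher
# clearing price -- the aggregate pattern is explained by the parts.
```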

As causal relationships are discovered, more and more complex situations can be understood as a combination of the various causes. Instead of abstracting the details away, the more complex situation is understood in terms of the individual causal agents and causal principles.

When new principles are discovered, it doesn't invalidate the previous ones. It adds new insights and an even larger assortment of scenarios that can be analyzed and understood. The principles of inflation don't change the principles of supply and demand, but together they can explain a larger set of situations.

One commonly suggested way of differentiating microeconomics from macroeconomics is that macroeconomics attempts to understand the whole of the economy, while microeconomics only deals with subsets. The problem with this approach is that microeconomic theory doesn't just apply in narrow situations. The principles hold over the entire economy.

A better way to understand it, though, is in terms of methodology. Micro starts with simple interactions and builds up while macro starts looking at the whole and tries to work down. That means micro theory can provide explanations and descriptions regarding the entire economy. Inflation can be understood in terms of its effects on individual decision-making. Same with trade surplus, monetary policy, boom and bust cycles, etc.

It's not clear if the same applies to macroeconomics. Can it eventually get so accurate that it can explain individual actions or micro-events? Even if it tried, it would probably fail as it attempts to use averages and aggregations, which wouldn't work when individual variability is added. This suggests that its methodology may only 'work' in complex systems where aggregation is possible.

Part of the conflict between microeconomics and macroeconomics is the fact that unlike the physical sciences, human behavior is not simple mechanistic behavior that can be measured once and applied to all. People have wildly different values, skills and responses to change. Microeconomics has to recognize this issue and properly accept that economics should be a qualitative science. But those hoping for a quantitative science will have to look towards macroeconomics, where averages might have the kind of quantitative predictability that individuals don't. If this is seen as a requirement, the macroeconomic approach is necessary.

This is an excellent example of how the values seen as important in a science can alter the substance of it. Microeconomics, as embodied by the Austrian school, values explanation via causal relationships, and accepts that quantitative predictions are not possible in this field. The macroeconomic approach values quantitative predictions so much that it is willing to focus on correlation and ignore causation, except as a possible hint at why the correlation might be true.

Clearly I find the macro approach to be fundamentally flawed for all of the reasons cited and more. The epistemological mirror to this approach is deeply flawed, viewing knowledge as an inherently inaccurate model that approaches accuracy but never arrives. The micro approach is rooted in true causality and seeks to understand complex systems by understanding how less complex components interact causally. Instead of taking a hazy picture and trying to twist and contort it until it produces better predictions, the micro approach provides a clearer picture based on the known principles and interactions.