P vs NP

Is P = NP?

Computer scientists and mathematicians have puzzled over this question for many years. To most of them, it represents nothing more than an abstract concept akin to a puzzle. To the realists, however, the solution to this question is much more than that. Many have prophesied that the answer is probably a resounding ‘No’. Things start getting interesting, however, when you begin to think about the repercussions of a positive answer to this infamous mathematical puzzle. In order to really understand what this means, let us begin by looking at what each of the terms (a.k.a. complexity classes) in the equation means. Before we get to that, however, we need to state a few more definitions.

Decision Problems

These are problems for which the answer is either a Yes or a No. Easy enough to understand, right? What are some examples of these? Well, these can really be anything. For instance, the question of ‘Did the chicken come before the egg?’ is absolutely valid in this context, and if it could be modeled in a mathematical framework, it would be a very important question indeed. For now, though, let us focus on more ‘math-y’ topics. An example of such a question would be ‘Is (0,0) a solution to the equation x_1 + x_2 = 0?’. The answer to this question, of course, is yes.

Polynomial Time Solutions

Let us first state that ‘Time’ here is a little more abstract than the time you and I deal with in the real world. The ‘Time’ of an algorithm is primarily a measure of the number of ‘steps’ involved in performing the algorithm. For instance, consider the following setup: you are given a set A of two possible elements, i.e., A = {(0,0),(0,1)}. The decision problem is ‘Does there exist an element in A that solves the equation x_1 + x_2 = 0?’. The way to answer this question is to try out each element of A and see if any of them solves the given equation. If you consider the process of plugging in an element and checking the output as one single ‘step’, then in the worst case we must check every element of A before we can answer, so the worst-case number of ‘steps’ is 2. Having done so, we find that (0,0) does indeed solve the equation, and the answer to our decision problem is ‘Yes’.
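
Here is a minimal Python sketch of the step-counting idea just described; the function names and the explicit step counter are my own, purely for illustration:

```python
# A minimal sketch of the brute-force check described above.
# Each call to satisfies() counts as one 'step'.

def satisfies(candidate):
    """Check whether a candidate (x1, x2) solves x1 + x2 = 0."""
    x1, x2 = candidate
    return x1 + x2 == 0

def decide(A):
    """Answer the decision problem: does some element of A solve the equation?"""
    steps = 0
    for candidate in A:
        steps += 1
        if satisfies(candidate):
            return True, steps
    return False, steps

A = [(0, 0), (0, 1)]
print(decide(A))  # (True, 1) here; in the worst case the loop runs len(A) = 2 times
```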

Let’s step the game up a little. What if A now had three elements instead of two? How many steps would we need in the worst case? You could try it yourself, or you could take my word for it when I say you’d need 3 steps. You can clearly see that the increase in the number of steps is linear, and as we all know, a linear function is merely a special case of a polynomial function. One could just as easily conceive of an example where the number of steps increases quadratically with the number of elements. Essentially, any question for which the number of steps needed to determine an answer grows polynomially is a question that is answerable in polynomial time. With modern computing power, polynomial time algorithms are generally quick to run. Our problem arises when we’re dealing with exponential time algorithms, where the number of steps involved in arriving at an answer grows exponentially as the number of elements increases. This is bad, since such problems quickly become practically impossible to answer.

P

The letter \textbf{P} here stands for \textit{Polynomial}. Essentially, this is a complexity class that represents all those decision problems that can be solved in Polynomial time. As we stated earlier, these are the ‘easy’ problems that do not pose much of a hassle to answer. An example of a problem belonging to this complexity class was provided earlier when we posed a decision problem to determine if an element in a set satisfied a given equation.

NP

The acronym \textbf{NP} is often confusingly thought to stand for Non-Polynomial. This is incorrect: \textbf{NP} actually stands for Nondeterministic Polynomial. It is also often thought that decision problems in this complexity class can only be answered in non-polynomial (read: exponential) time; the actual definition is subtler. A problem is in \textbf{NP} if a proposed answer to it can be verified in polynomial time. Before moving on, let us define what a ‘certificate’ is in our context. A ‘certificate’ is essentially a ‘guess’ for an answer. Here’s an example.

Consider the equation x_0 + x_1 + ... + x_n = 7, where each x_i can take the value 0 or 1. Assume that the decision problem is ‘Does there exist a solution to this equation?’. A naive search would have to plough through up to 2^{n+1} possible 0/1 assignments, which takes exponential time. However, suppose we are given a certificate, i.e., a ‘guess’: one possible set of values for the vector \textbf{x}, such as (0,1,0,...,1). Given a certificate that satisfies the equation, we can answer the question with ease, since checking it requires only a single pass over its entries, and a satisfying certificate proves that a solution exists.
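
To make the contrast concrete, here is a hedged Python sketch (the helper names are invented): verifying a given certificate takes one linear pass, while a blind search must wade through every possible 0/1 vector.

```python
from itertools import product

def verify(certificate, target=7):
    """Polynomial-time check: does this particular 0/1 vector sum to the target?
    One pass over n entries, so O(n) steps."""
    return sum(certificate) == target

def brute_force(n, target=7):
    """Exponential-time search: try all 2^n possible 0/1 vectors."""
    for x in product((0, 1), repeat=n):
        if sum(x) == target:
            return x
    return None

# Verifying a given guess is cheap...
print(verify((0, 1, 0, 1, 1, 1, 1, 1, 1, 0)))  # seven 1s -> True
# ...but searching blindly means examining up to 2^n candidates.
print(brute_force(10))
```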

This, in essence, is what problems in the \textbf{NP} class are all about. If one can answer the decision problem with ease given a good guess, then the decision problem is said to be ‘in \textbf{NP}’. You might think to yourself, ‘Well, can’t any decision problem be answered with ease once you’re given the right guess?’. The thing is, a decision problem can be formulated such that even the right guess does not yield an easy answer. Consider the question ‘Does there not exist a solution to this equation?’. Take a moment and think about this. Even if we were given a guess that does not solve the equation, we cannot be sure that there isn’t some other set of values for \textbf{x} that does. As a result, we are forced to run through all possible values that the vector \textbf{x} can take. This, as you might have figured out, takes an exponential amount of time, since in the worst case we are required to run through 2^n possibilities for the vector \textbf{x}. This decision problem is hence believed not to be in \textbf{NP} (it belongs to the complementary class, co-\textbf{NP}).

Implications of P = NP

So now you know what \textbf{P} and \textbf{NP} are. We mentioned earlier that the question of P = NP is significant not only to pure mathematicians and computer scientists. In fact, the answer to this question could have grave consequences for the functioning of every single security system on the planet. Think of a simple 128-bit encryption key. To put it in simple terms, this is a password system of 128 bits, where each bit can take the value 0 or 1. What makes this a good security system is the fact that in order to crack the password, in the worst case, one would have to run through 2^{128} (an exponential number of) options. So far, we are safe, since this is hard to do, and as we construct longer encryption keys, cracking becomes harder and harder as the exponential function keeps growing. The problem of cracking the password is thus a problem in the complexity class \textbf{NP}.
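
As a toy illustration (this is not real cryptography, and the key values are made up), here is how the worst-case number of guesses doubles with every extra bit:

```python
# Toy illustration of why brute-forcing an n-bit key takes exponential time.
# NOT real cryptography -- just the counting argument expressed in code.

def crack(secret_key, n_bits):
    """Try every n-bit key until the secret is found; worst case 2^n tries."""
    tries = 0
    for guess in range(2 ** n_bits):
        tries += 1
        if guess == secret_key:
            return tries
    return tries

print(crack(secret_key=200, n_bits=8))     # at most 2^8  = 256 tries
print(crack(secret_key=50000, n_bits=16))  # at most 2^16 = 65536 tries
# At 128 bits, the worst case is 2^128 tries -- far beyond any feasible search.
```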

Let us now assume, hypothetically, that all problems in the complexity class NP also belong to the complexity class P (this is essentially what P = NP means). This would mean that any password that was earlier nearly impossible to crack now becomes vulnerable, since the cracking could be done in polynomial time. As we stated earlier, with modern computing the way it is, a polynomial time algorithm is pretty quick to perform.

I’m sure you’ve understood the magnitude of what would happen if P = NP were true. Every security system would be obsolete. There would be no point in having banks. There would be no point in having classified government data. Literally every electronic system in place in the world would break down and we would effectively be thrown back into the stone age. Pretty scary, right? Of course, the fact does remain that no one has managed to prove that P = NP is true. Many feel that it is almost certainly not true and this helps the world maintain peace and order.


Queues: They’re really not that bad

Queueing theory (more commonly known as the science behind waiting in lines) is one of those things that affects us every single day without our realizing it. Like gravity, it is in effect every single moment and is modifying our surroundings little by little. Now, before you start to think that this is going to be a long article about queues and the math behind them, I’d like to state that this article is simply going to address one small (yet significantly interesting) application of Queueing theory by Disney.

Before that, however, I’m going to provide a historical outline of the science behind this fascinating concept. Over a hundred years ago, telephone companies faced a tough time determining how long a caller would remain on hold and how to calculate the capacity of centralized switchboards. The number of telephone subscribers was becoming too large for the operators to handle, and they weren’t able to accurately place the calls every time. This is where the Danish engineer A. K. Erlang came in (he published his first paper on this in 1909). He created mathematical models to describe the Copenhagen Telephone Exchange, estimating wait times, the number of callers in queue, the number of operators required, and other such metrics. The rest is history: Queueing theory has since found use in a plethora of real-world applications.
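
To give a flavor of the kind of calculation Erlang’s models enable, here is a small Python sketch using the standard Erlang B/C recursions for an M/M/c queue; the traffic numbers are invented for illustration, not Erlang’s actual data:

```python
def erlang_b(c, a):
    """Erlang B blocking probability, via the standard recursion."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

def erlang_c(c, a):
    """Erlang C: probability an arriving caller must wait (M/M/c queue)."""
    b = erlang_b(c, a)
    return c * b / (c - a * (1 - b))

# Illustrative numbers: calls arrive at lam = 18 per hour, each call
# takes 1/mu = 3 minutes, and there are c = 2 operators.
lam = 18.0      # arrivals per hour
mu = 20.0       # service completions per operator per hour
c = 2
a = lam / mu    # offered load in Erlangs (must satisfy a < c for stability)

p_wait = erlang_c(c, a)
avg_wait = p_wait / (c * mu - lam)  # mean wait in hours
print(f"P(wait) = {p_wait:.3f}, mean wait = {60 * avg_wait:.2f} minutes")
```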

I was made aware of the system put in place by Disney when I was going through an INFORMS article [1]. The general idea is that the company tries to improve each individual visitor’s experience by making use of forecasting and analytic tools. One such popular tool is Disney’s FAST-PASS system, a queueing forecaster that provides visitors with a small time window within which they can arrive at a ride in order to skip waiting in line. While the actual implementation is much more complex, the idea in a nutshell is that once you analyze the locations and movement patterns of all of the individuals at the amusement park as one large mass, it is easy enough (in some sense) to tell when the line at a particular ride or event will free up. Since this can vary depending on a number of factors, the FAST-PASS system provides an interval within which, with a high degree of confidence, the queue times will be low. A central command center runs these ‘magical’ forecasting tools every 5-10 minutes to revise the predictions and estimates and provide customers with a seamless and enjoyable experience. Without doubt, a happier crowd is one that has to wait in line for the least amount of time. But even Disney, with all of its Operations Research skill, will be unable to bring the wait time down to zero. This is where one of their newest innovations comes in – Interactive Queues. Prior to entering the rides, visitors can have a lot of fun even while waiting, by interacting with smaller, yet powerful, entertainment packages.

Disney, as a company, understands the importance of squeezing out every bit of enjoyment possible and making it available to its visitors, and this is one of the reasons it is much, much more than merely an amusement park. As far as the Operations Research world is concerned, Disney is a goldmine of data and predictive analysis that provides a bit of fun, entertainment and amusement as a side activity.

[1] https://www.informs.org/content/download/258057/2434897/file/Roundtable-revised_ORMS3902_April2012.pdf

Language and Social Networks

Everyone speaks a language. Some people speak two, and yet others speak several. Yet the one aspect that stands out across all of these languages is how extremely different they are from one another. This, understandably, leads us to ask the question, ‘How did language evolve?’

Let’s consider English. The evolution of English takes root with the Anglo-Saxons (well, kind of). The language we now call English is actually a blend of many languages. Even the original Anglo-Saxon was already a blend of the dialects of west Germanic tribes living along the North Sea coast. Later, in the 800s, the Northmen (Vikings) came to England, mostly from Denmark, and settled in with the Anglo-Saxons. They were followed by William the Conqueror and his Norman supporters, who invaded England in the 11th century. These men, having originally come from Normandy, spoke a Norman dialect of French and used this for their day-to-day interactions.

English since then has been absorbing vocabulary from a huge number of sources. French, the language of diplomacy in Europe for centuries, Latin, the language of the church, and Greek, the language of philosophy and science, contributed many words, especially the more “educated” ones. Culturally specific words from other regions of Europe have also been added. As time passed, words from Asian languages were incorporated, beginning with the emergence of Asian nations as world powers. Not to mention the incredible number of new words that are created all the time in order to accommodate a new object, emotion, or any tangible and/or intangible aspect of life.

The recurrent theme during the course of this evolution has always remained the same: some form of interaction between two cultures leads to a gradual modification of a language. And as these interactions accumulate over time, the language changes radically.

In the field of sociolinguistics, a ‘social network’ is a term used to describe the structure of a particular speech community. Social networks are composed of a “web of ties” (Lesley Milroy) between individuals, and the structure of a network will vary depending on the types of connections it is composed of. Social network theory (as used by sociolinguists) posits that social networks, and the interactions between members within the networks, are a driving force behind language change.

Social network interactions are really the driving force behind all of humanity. We exist because we interact with our neighbors, and this holds true even for something as intangible as language. Modeling the spread of language is, in many ways, quite similar to modeling the spread of ‘cultural information’ through a network. It can be argued that the existence and/or nature of a language is primarily defined by the existence and/or nature of a specific culture.

[Figure: a network of cultures; node F is linked to node SP and to clusters of blue and green nodes]

Observe the network described above, which shows different cultures, with links describing the interactions taking place between them. If you visualize the node ‘F’ as an aggressor looking to expand its reach, pretty soon it will have occupied either the blue nodes or the green nodes. Let’s assume that F decides to ‘invade’ SP. A cultural war of sorts ensues, the outcome of which reflects which culture was more dominant during that period. A similar ‘game’ is played between all of the nodes that interact with each other. Based on which one is more dominant and influential, the culture that emerges reflects an amalgamation of the two previous cultures. As a result, the language that evolves is the amalgamation of the two parent languages.
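
To make the ‘game’ concrete, here is a toy, voter-model-style simulation sketch; the network, the influence weights, and the win rule are all my own inventions, purely to illustrate the dynamics:

```python
import random

# A toy sketch of the 'cultural game' described above. Node F is given a
# higher influence weight; none of these numbers are calibrated to anything.

edges = [("F", "SP"), ("F", "B1"), ("SP", "B2"), ("B1", "B2"), ("B2", "G1")]
culture = {n: n[0] for n in ["F", "SP", "B1", "B2", "G1"]}  # start: own culture
influence = {"F": 3.0, "S": 1.0, "B": 1.0, "G": 1.0}        # F is dominant

random.seed(1)
for step in range(200):
    u, v = random.choice(edges)           # two cultures interact along an edge
    cu, cv = culture[u], culture[v]
    if cu == cv:
        continue
    # The more influential culture wins this local 'war' more often.
    total = influence[cu] + influence[cv]
    winner = cu if random.random() < influence[cu] / total else cv
    culture[u] = culture[v] = winner

print(culture)  # with a dominant F, most nodes tend to end up with culture 'F'
```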

Interesting how this happens, isn’t it? Given a certain graph of interactions, we merely observe how the graph would change if a certain node were to attempt to press into a different node’s territory. This also leads to an interesting observation. As long as all the nodes are suitably well connected and there exists one dominant culture, that culture is going to end up influencing every other node in the network. This is possibly what happened when the British decided to try and ‘take over the world’. English is now a dominant force in today’s world, and its influence is felt in almost every corner.

The thing to be noted here, however, is that this evolution takes place over a period of many years, and it is almost impossible to set up an experiment to study this evolutionary process. The challenge of accurately modeling the dynamics of the spread of cultural influence thus remains great.

References

[1] http://webspace.ship.edu/cgboer/evolenglish.html

[2] http://en.wikipedia.org/wiki/Social_Network_(sociolinguistics)

NSA and Phone Call Records

9/11 was a tragedy unlike any that our generation, and possibly many others, has witnessed. One of the most immediate reactions to such an attack is a mad rush to discover the perpetrators in order to bring them to justice as quickly as possible. But as you can very well imagine, when you’re dealing with a terrorist network that operated so covertly that it managed to hijack multiple planes and almost ram one of them into the White House, it takes quite some doing to uncover these hidden fiends. And yet, barely a day after the tragic attacks, the leadership structure of the terrorist network made its way into the newspapers. It’s not as if the terrorists made their organizational hierarchy public. How, then, did the investigators figure it out?

Enter Social Network Analysis [1]. While we cannot know how the investigators actually uncovered the hierarchical structure (it’s all probably classified), we can get an idea from an article [2] by network scientist Valdis Krebs, who pieced together links between the hijackers from news reports and calculated certain measures of social influence [3]:

Degree Centrality, which refers to the number of links each hijacker had to the rest of the network; Betweenness Centrality, which looks at their location in the network relative to other members; and Closeness Centrality, which looks at the average social distance between a particular member and all of the other members of the network. These measures are just some of the common metrics employed while analyzing a Social Network.
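
As a rough sketch of how such measures are computed in practice, here is a Python example using the networkx library on a made-up stand-in network (not Krebs’s actual data):

```python
import networkx as nx

# A made-up stand-in network to show how the three centrality measures
# mentioned above are computed in practice.
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("A", "D"),
    ("B", "C"), ("D", "E"), ("E", "F"),
])

degree = nx.degree_centrality(G)             # share of direct links
betweenness = nx.betweenness_centrality(G)   # how often a node sits on shortest paths
closeness = nx.closeness_centrality(G)       # inverse of average distance to all others

for node in G:
    print(node, round(degree[node], 2), round(betweenness[node], 2),
          round(closeness[node], 2))
```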

After observing several summaries of data in various newspapers, Krebs began mapping the network. At certain points he had to be cautious, because a lot of incorrect data was being published in the press. As new data emerged, Krebs and his team cautiously added new nodes and edges to the network. Eventually, he managed to get a good picture of the most influential people in the network. Do consider reading through his entire analysis [2].

While all this is quite impressive, there is still a big problem that needs to be addressed. As Krebs himself mentions, this kind of research is used in ‘prosecution, rather than prevention’. The real need of the hour is to ensure that attacks like these are prevented.

And this is precisely where data from cell phone companies comes in. An analysis of communication records provides investigators with a good sense of who’s plotting with whom. Consider the simple concept of triadic closure: if two people have a mutual friend, it is highly likely that those two people will end up becoming friends as well. With this concept in mind and full access to communication records, you could look for contacts that two suspects have in common and start to build out a map of the conspiracy. Using very simple concepts such as these, you could begin to map out the entire network, progressing from link to link. Once this is done, you can analyze it mathematically: studying the forces that might drive the network’s formation, the way it has formed so far, the direction in which it is growing (giving you an idea, perhaps, of where an attack might take place), who the most influential person in the network is, and many other such insights. After that, it is only a matter of issuing surveillance warrants and tapping the phones.
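
Here is a minimal sketch of triadic closure used as a link predictor; the ‘call records’ are invented, and the scoring is the simplest possible common-neighbors count:

```python
import networkx as nx
from itertools import combinations

# Triadic closure as a link predictor: the more mutual contacts two people
# share, the more likely a hidden link exists between them.
calls = [("ali", "bo"), ("ali", "cy"), ("bo", "cy"),
         ("bo", "dee"), ("cy", "dee"), ("dee", "eve")]
G = nx.Graph(calls)

scores = []
for u, v in combinations(G.nodes, 2):
    if not G.has_edge(u, v):
        mutual = len(list(nx.common_neighbors(G, u, v)))
        if mutual:
            scores.append((mutual, u, v))

# Highest-scoring pairs are the best candidates for an unobserved link.
for mutual, u, v in sorted(scores, reverse=True):
    print(f"{u} -- {v}: {mutual} mutual contact(s)")
```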

Now, this is where the problem starts. There are two conflicting schools of thought regarding the collection of cell phone data. One group believes that collecting all possible data could lead to a lot of useless information that clouds the investigators’ ability to determine the terrorist structure with high accuracy. On the flip side, insufficient data is a problem because of, well, insufficient data! Also, let’s not forget the millions of people who believe that the government is spying on them by collecting such cell phone data. It is extremely hard to get the common man to understand why this sort of data collection is crucial to national security. If such a system had been in place before 9/11, chances are the attack could have been averted.

The point of this article, though, is that Social Network Analysis has a very important role to play in national security. The advances in this field along this direction will therefore be quite interesting to follow over the next few years.

References

[1] http://www.digitaltonto.com/2013/how-the-nsa-uses-social-network-analysis-to-map-terrorist-networks/

[2] http://firstmonday.org/ojs/index.php/fm/article/view/941/863

[3] http://en.wikipedia.org/wiki/Centrality

Early Birds of Graph Theory

If atoms are the building blocks of all life in the universe, concepts in Graph Theory are probably the building blocks of all of network research (maybe not. I’m probably missing a few important blocks here. But you get the point.) Graph Theory, used extensively in the fields of computer science and mathematics, is essentially the study of graphs, which are structures used to model the relationships between objects.

Here’s a picture of a graph (just to make sure we’re on the same plane of understanding as we move forward):

[Figure: an example graph, the complete graph on six vertices]

This post is going to be about two of the earliest applications of Graph Theory. They’re both quite fascinating and, truth be told, probably ahead of their time. Let’s check them out.

a) Königsberg Bridges

The Seven Bridges of Königsberg is a historical and landmark problem in the field of mathematics, and it paved the way for much of the Graph Theory that followed.

In the early 18th century, the citizens of Königsberg spent their days walking across the intricate arrangement of bridges over the waters of the Pregel (Pregolya) River, which surrounded two central landmasses connected by a bridge. Additionally, the first landmass (an island) was connected by two bridges to the lower bank of the Pregel and by two bridges to the upper bank, while the other landmass (which split the Pregel into two branches) was connected to the lower bank by one bridge and to the upper bank by one bridge, for a total of seven bridges. According to folklore, the question arose of whether a citizen could take a walk through the town in such a way that each bridge would be crossed exactly once. The picture below describes the situation as presented.

[Figure: map of Königsberg showing the two landmasses and the seven bridges across the Pregel]

I’m still a little unclear as to how exactly this problem arose. Interestingly, Euler, who provided the solution to this problem (namely, that no such walk exists), was not very thrilled at having made any sort of mathematical breakthrough. In his own words, in a letter (in English translation) he wrote to a friend who had been corresponding with him about the problem:

Thus you see, most noble Sir, how this type of solution bears little relationship to mathematics, and I do not understand why you expect a mathematician to produce it, rather than anyone else, for the solution is based on reason alone, and its discovery does not depend on any mathematical principle. Because of this, I do not know why even questions which bear so little relationship to mathematics are solved more quickly by mathematicians than by others.

Whatever Euler’s attitude, though, it cannot be questioned that his solution to this problem has gone down in the history books as the stepping stone to modern Graph Theory. Let’s take a look at his reasoning.

Euler very quickly pointed out that the only item of relevance in this problem is the sequence of landmasses visited and the bridges taken to achieve this sequence. Everything else, including the nature of the landmasses being connected and the nature of the bridges, is quite irrelevant. From here, Euler proceeded to develop a ‘graph’ structure to represent the sequence of possible routes.

The graph formulated here is a figure consisting of points (called vertices, the plural of vertex) and connecting lines or curves (called edges). The problem of the bridges of Königsberg was thus reformulated as asking whether this graph can be traced without tracing any edge more than once. Now, for each of the vertices of a graph, the order of the vertex is the number of edges at that vertex. The figure below shows the graph of the Königsberg bridge problem, with the orders of the vertices labeled.

[Figure: the graph of the Königsberg bridge problem, with the order of each vertex labeled]

Euler’s solution to the problem of the Königsberg bridges involved the observation that when a vertex is “visited” in the middle of the process of tracing a graph (walking through the routes), there must be an edge coming into the vertex and another edge leaving it, and so the order of such a vertex must be an even number. This must be true for all but at most two vertices: the one you start at and the one you end at. A connected graph is therefore traversable if and only if it has at most two vertices of odd order. Now, a quick look at the graph above shows that there are more than two vertices of odd order (in fact, all four are odd), and so the graph cannot be traced. The implication? The desired walk across the Königsberg bridges is impossible.
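
Euler’s parity argument is easy to check in code. Below is a small sketch; the vertex labels are my own choice for the four landmasses:

```python
from collections import Counter

# Euler's parity test on the Königsberg multigraph. Labels are my own:
# A = the island, B = upper bank, C = lower bank, D = the other landmass.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(degree)  # A: 5, B: 3, C: 3, D: 3 -- all four orders are odd
# A connected graph has such a walk iff it has at most two odd-order vertices.
print("traversable" if len(odd) <= 2 else "impossible")  # -> impossible
```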

Euler’s work was subsequently presented to the St. Petersburg Academy on August 26, 1735, and published as Solutio problematis ad geometriam situs pertinentis (The solution of a problem relating to the geometry of position) in the journal Commentarii academiae scientiarum Petropolitanae in 1741.

As for the bridges themselves, tragedy befell them. During World War II, four of the eight bridges of Königsberg were damaged, and after the war the city became part of the Soviet Union and acquired a new name, Kaliningrad. Today, only a faint memory of the bridges exists for those who seek inspiration from the first ever theorem in Graph Theory.

b) The Icosian Game 

The Icosian Game is a mathematical game invented in 1857 by William Rowan Hamilton. The object of the game is to find a Hamiltonian cycle along the edges of a dodecahedron, such that every vertex is visited exactly once and the ending point is the same as the starting point.

Right. I probably lost most of you in that paragraph. Let me put it down in simpler terms. Take a look at the picture of a dodecahedron below.

[Figure: a dodecahedron]

It’s a pretty cool structure with 12 pentagonal faces, where three faces meet at each vertex. Ignoring the distances involved and flattening the 3-D picture down into 2-D, we get a planar graph of the same structure.

The game essentially goes like this: you start at a node (a vertex, which can be imagined to be a city), travel along the edges visiting every other node exactly once, and eventually return to the original vertex. Such a path is today commonly referred to as a Hamiltonian cycle. In fact, for those of you who are aware of the famous Traveling Salesman Problem (TSP), it is believed that the origins of the TSP can be traced back to this very Icosian Game formulated by Hamilton. The figure below shows a solution of the puzzle.

[Figure: a solution of the Icosian puzzle traced on the dodecahedral graph]
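
For the curious, here is a hedged sketch that finds such a cycle by naive backtracking, using networkx’s built-in dodecahedral graph; this is fine at this size, though the general Hamiltonian cycle problem is famously hard:

```python
import networkx as nx

def hamiltonian_cycle(G, start=0):
    """Naive backtracking search for a Hamiltonian cycle. Fine for 20 nodes,
    but the general problem is NP-complete, so this won't scale."""
    n = G.number_of_nodes()

    def extend(path, visited):
        if len(path) == n:
            # All vertices used: a cycle exists if we can close back to start.
            return path if G.has_edge(path[-1], start) else None
        for nxt in G.neighbors(path[-1]):
            if nxt not in visited:
                result = extend(path + [nxt], visited | {nxt})
                if result:
                    return result
        return None

    return extend([start], {start})

G = nx.dodecahedral_graph()  # the 20-vertex graph of Hamilton's puzzle
print(hamiltonian_cycle(G))  # one solution of the puzzle, as a list of vertices
```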

Today, there are only three known examples of this puzzle in the world, a picture of one of which is provided below.

[Figure: one of the three surviving physical copies of the Icosian puzzle]

The puzzle has since seen major revisions and modifications, and it forms the basis for one of the most intellectually challenging problems of our time, the TSP.

 

References

[1] http://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg

[2] http://www.ime.usp.br/~yw/2013/grafinhos/aulas/Paper-Euler-Letters.pdf

[3] http://kursinfo.himolde.no/lo-kurs/lo904/Laporte/BridgesPaper.pdf

[4] http://en.wikipedia.org/wiki/Icosian_game

[5] http://puzzlemuseum.com/month/picm02/200207icosian.htm

Cliodynamics

Let’s begin this article by quoting the Wikipedia ‘definition’ of Cliodynamics:

Cliodynamics is a multidisciplinary area of research focused on the modeling of historical dynamics [1]

I first chanced upon this field while discussing Hari Seldon’s ‘Psychohistory’ with a friend. For those of you who aren’t aware of Psychohistory, it is (as Wikipedia again, very succinctly puts it) ‘a fictional science in Isaac Asimov’s Foundation universe which combines history, sociology, and mathematical statistics to make general predictions about the future behavior of very large groups of people, such as the Galactic Empire.’

The basic idea sounds simple enough: you combine all available data regarding humanity at any point in time and use it to form predictions about its future. Psychohistory is probably one of the earliest mentions of Social Network Analysis in literature. When you think about it, the very fact that Asimov managed to conceive such an idea is, in itself, monumental. Psychohistory, much like its modern extension in Social Network Analysis, is centered around the idea that all interaction that takes place among human beings leads to (observable or unobservable) consequences. However, in order to decide what the consequences are, one must be able to accurately model the driving forces that produce them. Hari Seldon, a fictional character in Asimov’s Foundation series, develops such a mathematical model and presents the ‘Prime Radiant’, a device that stores the psychohistorical equations showing the future development of humanity.

At first look, everything I’ve mentioned above seems (and feels) like exactly what it is: pure fiction! But it’s really not that far-fetched a concept. Historical processes are quite dynamic and quite interconnected. Nations rise and fall, and cultures prosper and fade. Everything, however, is governed by some form of hidden law that manifests itself only over evolutionary time. Let’s consider a very small example. One of the primary reasons for the immediate and enthused entry of the United States into World War II was the bombing of Pearl Harbor. The USA retaliated, and the rest is history. These events could very well be modeled as a directed graph, with each node representing an event. Modeling history this way also helps in determining possible alternative outcomes, as such an outcome would simply be an alternative realization of the same graph. Pretty cool, right?

Let’s return to the original topic of discussion, a.k.a. Cliodynamics. The term was originally coined by Peter Turchin in 2003. Turchin developed a theory by which large historical empires evolve by the mechanism of multilevel selection [2]. One of the principal ideas behind this theory is that individuals who form groups often interact among each other much more than with individuals from a different group. This leads to the idea that the original theory of natural selection must operate on multiple levels, in that survival of the fittest has a lot more to do with group dynamics than with an individual’s behavior. To put it in plainer terms, how you perform as a group has more of a bearing on your survival rate as an individual. One of Turchin’s most important hypotheses is that population pressure causes increased warfare. After certain revisions, he proposes that population and warfare are two aspects of a nonlinear dynamical system, in which population growth leads to increased warfare, but increased warfare in turn causes population numbers to decline [3].
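
To see how such a feedback loop can generate boom-and-bust cycles, here is a toy sketch; these are not Turchin and Korotayev’s actual equations, just a minimal coupled system in the same spirit, with invented parameters:

```python
# A toy version of the population-warfare feedback described above, integrated
# with simple Euler steps. Parameters are invented purely for illustration.

r, K = 0.02, 1.0   # population growth rate and carrying capacity
a, b = 0.01, 0.02  # warfare builds with population, decays on its own
d = 0.05           # extra deaths caused by warfare

P, W = 0.1, 0.0    # initial population and warfare intensity
dt = 1.0

for t in range(2000):
    dP = r * P * (1 - P / K) - d * P * W   # logistic growth minus war deaths
    dW = a * P - b * W                     # warfare lags behind population
    P, W = P + dt * dP, W + dt * dW
    if t % 400 == 0:
        print(f"t={t:4d}  population={P:.3f}  warfare={W:.3f}")

# Population rises, warfare builds up with a lag, population then declines
# and warfare subsides: the cyclical pattern Turchin describes.
```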

Turchin’s approach – which he calls Cliodynamics, after Clio, the muse of history in Greek mythology – is focused on studying the quintessential forces that must exist in order to drive historical time. It is an attempt to show that history is a chain of well-defined events, as opposed to the popular belief among sociologists that historical events ‘just happened’.

Of course, while this is a very interesting and exciting field to follow, it must be noted that this approach to modeling history will not be able to tell us how and when a certain event, say a war, will occur. It will only be able to tell us the most likely cause for the occurrence of a certain historical event. In this vein, the development of a device akin to Hari Seldon’s ‘Prime Radiant’ remains something of a distant dream.

References

[1] http://en.wikipedia.org/wiki/Cliometrics

[2] Turchin P. 2009. A Theory for Formation of Large Empires. Journal of Global History 4:191-207.

[3] Turchin P, Korotayev A. 2007. Population dynamics and internal warfare: a reconsideration.