The Darwin instability is an orbital instability that occurs in binary systems in which viscous (tidal) dissipation is present. It is named after G. H. Darwin, who analyzed it in an 1879 paper.
Consider a binary system with a separation $a$. A key assumption is that the components are tidally locked - their rotation period about their axes is the same as the orbital period. This assumption is valid if the tidal synchronization timescale is sufficiently short compared to the other orbital-evolution timescales that characterize the system. If the total mass of the system is $M = m_1 + m_2$, the orbital angular frequency is given by (Keplerian orbit):

\[ \Omega = \sqrt{\frac{GM}{a^3}} \]
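As a quick numerical check of this relation, the following sketch evaluates the Keplerian angular frequency; the Sun-like mass and 1 AU separation are illustrative values, not taken from the text:

```python
import math

G = 6.674e-11  # gravitational constant, SI units (m^3 kg^-1 s^-2)

def orbital_angular_frequency(M, a):
    """Keplerian orbital angular frequency Omega = sqrt(G*M / a^3)."""
    return math.sqrt(G * M / a**3)

# Illustrative example: total mass of one solar mass, separation of 1 AU.
M_sun = 1.989e30  # kg
AU = 1.496e11     # m
Omega = orbital_angular_frequency(M_sun, AU)
period_years = 2 * math.pi / Omega / (365.25 * 24 * 3600)
print(f"Orbital period: {period_years:.3f} yr")  # close to 1 year, as expected
```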
If the moments of inertia of the two components are $I_1$ and $I_2$, then the total angular momentum of the synchronized system is given by:

\[ L = \mu a^2 \Omega + (I_1 + I_2)\,\Omega = \mu a^2 \Omega + I\,\Omega \]

where $\mu = m_1 m_2 / M$ is the reduced mass and $I = I_1 + I_2$.
The effective moment of inertia of the system is given by:

\[ I_{\rm eff} \equiv \frac{dL}{d\Omega} = I - \frac{1}{3}\,\mu a^2 \]

where we have used Kepler's law to take the derivative $da/d\Omega = -\dfrac{2a}{3\Omega}$. Note that we have also assumed that the components' moments of inertia are independent of $\Omega$ - meaning that the shapes of the components remain the same as they change their rotational frequency.
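The differentiation can be spelled out by eliminating $a$ in favor of $\Omega$ via Kepler's law, $a = (GM)^{1/3}\Omega^{-2/3}$:

```latex
% Orbital angular momentum written in terms of Omega alone:
%   L_orb = mu a^2 Omega = mu (GM)^{2/3} Omega^{-1/3}
\begin{align*}
L &= \mu\,(GM)^{2/3}\,\Omega^{-1/3} + I\,\Omega \\
\frac{dL}{d\Omega}
  &= -\tfrac{1}{3}\,\mu\,(GM)^{2/3}\,\Omega^{-4/3} + I
   \;=\; I - \tfrac{1}{3}\,\mu a^2
\end{align*}
```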
We see that for large orbital separations, $\mu a^2 > 3I$ and hence $dL/d\Omega < 0$, implying that a decrease in the total angular momentum results in an increase in $\Omega$, as in Kepler's problem. In this regime, if some angular momentum is removed from the orbit, the orbit shrinks, the orbital angular frequency increases, and the tides spin up the components, transferring angular momentum from the orbit to the spins (and hence shrinking the orbit further), bringing the system to a synchronized state again.
At sufficiently small orbital separations - $a < a_{\rm crit} = \sqrt{3I/\mu}$ - we have $dL/d\Omega > 0$, and the synchronized sequence has a local minimum of angular momentum at $a_{\rm crit}$ (where $dL/d\Omega = 0$). Removal of angular momentum from the orbit would increase the orbital angular frequency, as before, but the system no longer has enough total angular momentum to maintain synchronization. The tidal spin-up of the components then leads to an ever-growing orbital angular frequency and a shrinking orbital separation, up to the binary merger.
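The stability criterion can be sketched numerically. The block below treats the components as rigid uniform spheres with $I = \tfrac{2}{5} m R^2$ each - an illustrative assumption (real stars are centrally concentrated, so their true moments of inertia, and hence $a_{\rm crit}$, are smaller):

```python
import math

def critical_separation(I_total, mu):
    """Separation below which the Darwin instability sets in: a_crit = sqrt(3*I/mu)."""
    return math.sqrt(3 * I_total / mu)

def dL_dOmega(I_total, mu, a):
    """Effective moment of inertia I_eff = I - mu*a^2/3 of the synchronized binary."""
    return I_total - mu * a**2 / 3

# Illustrative equal-mass binary of rigid uniform spheres (I = 2/5 m R^2 each).
m = 2.0e30  # kg, mass of each component
R = 7.0e8   # m, radius of each component
I_total = 2 * (2 / 5) * m * R**2
mu = m * m / (2 * m)  # reduced mass of an equal-mass binary: mu = m/2

a_crit = critical_separation(I_total, mu)
print(f"a_crit = {a_crit / R:.2f} R")                # about 2.19 stellar radii
assert dL_dOmega(I_total, mu, 2 * a_crit) < 0        # wide orbit: stable branch
assert dL_dOmega(I_total, mu, 0.5 * a_crit) > 0      # tight orbit: Darwin-unstable
```

For uniform spheres the critical separation, $a_{\rm crit} = \sqrt{4.8}\,R \approx 2.19\,R$, lies just outside contact ($a = 2R$); centrally concentrated components become unstable only at tighter separations.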