The moment magnitude scale (MMS; denoted explicitly with Mw, and generally implied with use of a single M for magnitude) is a measure of an earthquake's magnitude ("size" or strength) based on its seismic moment (a measure of the "work" done by the earthquake), expressed in terms of the familiar magnitudes of the original "Richter" magnitude scale.
Moment magnitude (Mw) is considered the authoritative magnitude scale for ranking earthquakes by size because it is more directly related to the energy of an earthquake, and does not saturate. (That is, it does not underestimate magnitudes as other scales do in certain conditions.) It has become the standard scale used by seismological authorities (such as the U.S. Geological Survey), replacing (when available, typically for M > 4) use of the ML (local magnitude) and Ms (surface-wave magnitude) scales. Subtypes of the moment magnitude scale (Mww, etc.) reflect different ways of estimating the seismic moment.
At the beginning of the previous century very little was known about how earthquakes happen, how seismic waves are generated and propagate through the earth's crust, and what they can tell us about the earthquake rupture process; the first magnitude scales were therefore empirical. The initial step in determining earthquake magnitudes empirically came in 1931 when the Japanese seismologist Kiyoo Wadati showed that the maximum amplitude of an earthquake's seismic waves diminished with distance at a certain rate. Charles F. Richter then worked out how to adjust for epicentral distance (and some other factors) so that the logarithm of the trace amplitude could be used as a measure of "magnitude" that was internally consistent and corresponded roughly with estimates of an earthquake's energy. He established a reference point and the now familiar ten-fold (exponential) scaling of each degree of magnitude, and in 1935 published his "magnitude" scale, now called the local magnitude scale, labeled ML.
The local magnitude scale was developed on the basis of shallow (~15 km deep), moderate-sized earthquakes at a distance of approximately 100 to 600 kilometers, conditions where the surface waves are predominant. At greater depths, distances, or magnitudes the surface waves are greatly reduced, and the local magnitude scale underestimates the magnitude, a problem called saturation. Additional scales were developed – a surface-wave magnitude scale (Ms) by Beno Gutenberg in 1945, a body-wave magnitude scale (mB) by Gutenberg and Richter in 1956, and a number of variants – to overcome the deficiencies of the ML scale, but all are subject to saturation. A particular problem was that the Ms scale (which in the 1970s was the preferred magnitude scale) saturates around Ms 8.0, and therefore underestimates the energy release of "great" earthquakes such as the 1960 Chilean and 1964 Alaskan earthquakes. These had Ms magnitudes of 8.5 and 8.4, respectively, but were notably more powerful than other M 8 earthquakes; their moment magnitudes were closer to 9.6 and 9.3.
A single force acting on an object with sufficient strength to overcome any resistance will cause the object to move ("translate"). A pair of forces, acting on the same "line of action" but in opposite directions, will tend to cancel; if they cancel exactly there will be no translation, though the object will experience stress, either tension or compression. If the pair of forces are offset, acting along parallel but separate lines of action, the object experiences a rotational force, or torque. In mechanics (the branch of physics concerned with the interactions of forces) this model is called a couple, also simple couple or single couple. If a second couple of equal and opposite magnitude is applied their torques cancel; this is called a double couple. A double couple can be viewed as "equivalent to a pressure and tension acting simultaneously at right angles."
The single couple and double couple models are important in seismology because each can be used to derive how an earthquake rupture generates seismic waves. Once that relation is understood it can be inverted to use the earthquake's observed seismic waves to determine its other characteristics, including fault geometry and seismic moment.
Mathematical study of earthquakes began in 1897 when Robert Oldham confirmed that the Earth is effectively elastic, and thus previous work in elasticity theory on the generation and propagation of waves could be applied to the study of seismic waves. Various theoretical advances followed, including the Italian Vito Volterra's theory of dislocations in 1907, which much later was used by A. V. Vvedenskaya (in 1956) and J. A. Steketee (in 1958) to model seismic sources as "dislocations" (slippage) on a fault.
In 1923 Hiroshi Nakano showed that certain aspects of seismic waves could be explained in terms of a double couple model. This led to a three-decade-long controversy over the best way to model the seismic source: as a single couple, or a double couple? While Japanese seismologists favored the double couple, most other seismologists favored the single couple. Although the single couple model had some shortcomings, it seemed more intuitive, and there was a belief — mistaken, as it turned out — that the elastic rebound theory for explaining why earthquakes happen required a single couple model. In principle these models could be distinguished by differences in the radiation patterns of their S-waves, but the quality of the observational data was inadequate for that.
The debate ended when Maruyama (1963), Haskell (1964), and Burridge & Knopoff (1964) showed that if earthquake ruptures are modeled as dislocations the pattern of seismic radiation can always be matched with an equivalent pattern derived from a double couple, but not from a single couple. This was confirmed in 1966 when Keiiti Aki showed that the seismic moment of the 1964 Niigata earthquake as calculated from the seismic waves on the basis of a double couple was in reasonable agreement with the seismic moment calculated from the observed physical dislocation.
An earthquake can be modeled as a dislocation (a rupture accompanied by slipping), which is mathematically equivalent to a double couple (a pair of force couples). The total (net) force and total moment of a double couple are zero, because the force components are equal and opposite, and therefore cancel. The magnitude of the opposing forces is the seismic moment, symbol M0. This was first derived theoretically by the Russian geophysicist A. V. Vvedenskaya.
The first calculation of an earthquake's seismic moment was in 1966 by Keiiti Aki, a professor of geophysics at the Massachusetts Institute of Technology. Using detailed field studies of the 1964 Niigata earthquake and data from a new generation of seismographs in the World-Wide Standardized Seismograph Network (WWSSN), he first confirmed that an earthquake is "a release of accumulated strain energy by a rupture", and that this can be modeled by a double couple. With further analysis he showed how the energy radiated by seismic waves can be used to estimate the energy released by the earthquake. This was done using seismic moment, defined as
M0 = μūS,

with μ being the rigidity (or resistance to movement) of the fault, S the surface area of the fault rupture, and ū the average dislocation (slip distance). (Modern formulations replace μūS with the equivalent D̄A, known as the "geometric moment" or "potency".) By this equation the moment determined from the double couple of the seismic waves can be related to the moment calculated from knowledge of the surface area of fault slippage and the amount of slip. In the case of the Niigata earthquake the dislocation estimated from the seismic moment reasonably approximated the observed dislocation.
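As a rough illustration of the defining equation, the moment of a hypothetical moderate earthquake can be computed from round-number fault parameters (the values below are assumptions chosen for illustration, not measurements of any real event):

```python
# Seismic moment M0 = mu * u_bar * S, evaluated for assumed
# round-number parameters of a hypothetical moderate crustal earthquake.

mu = 3.0e10        # rigidity (shear modulus) of crustal rock, in pascals
S = 20e3 * 10e3    # rupture surface area: a 20 km x 10 km fault patch, in m^2
u_bar = 1.0        # average dislocation (slip) across the fault, in meters

M0 = mu * u_bar * S   # seismic moment, in newton-meters
print(f"M0 = {M0:.1e} N·m")   # M0 = 6.0e+18 N·m
```

A moment of this size corresponds to a moment magnitude in the mid-6 range.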
Seismic moment is a measure of the work (more precisely, the torque) that results in inelastic (permanent) displacement or distortion of the earth's crust. By means of the double couple equivalence it is related to the total energy released by an earthquake. However, the power or potential destructiveness of an earthquake depends (among other factors) on how much of the total energy is converted into seismic waves. This is typically 10% or less of the total energy, the rest being expended in fracturing rock or overcoming friction (generating heat).
Nonetheless, seismic moment is regarded as the fundamental measure of earthquake size, representing more directly than other parameters the physical size of an earthquake. As early as 1975 it was considered "one of the most reliably determined instrumental earthquake source parameters."
Most earthquake magnitude scales suffered from the fact that they only provided a comparison of the amplitude of waves produced at a standard distance and frequency band; it was difficult to relate these magnitudes to a physical property of the earthquake. Gutenberg and Richter suggested that radiated energy Es could be estimated as

log10 Es ≈ 1.5 Ms + 4.8

(in Joules). Unfortunately, the duration of many very large earthquakes was longer than 20 seconds, the period of the surface waves used in the measurement of Ms. This meant that giant earthquakes such as the 1960 Chilean earthquake (M 9.5) were only assigned an Ms 8.2. Caltech seismologist Hiroo Kanamori recognized this deficiency, and he took the simple but important step of defining a magnitude based on estimates of radiated energy, Mw, where the "w" stood for work (energy):

Mw = (log10 Es − 4.8) / 1.5
Kanamori recognized that measurement of radiated energy is technically difficult since it involves integration of wave energy over the entire frequency band. To simplify this calculation, he noted that the lowest frequency parts of the spectrum can often be used to estimate the rest of the spectrum. The lowest frequency asymptote of a seismic spectrum is characterized by the seismic moment, M0 . Using an approximate relation between radiated energy and seismic moment (which assumes stress drop is complete and ignores fracture energy),
Es ≈ M0 / (2 × 10^4)

(where Es is in Joules and M0 is in N·m), Kanamori approximated Mw by

Mw = (log10 M0 − 9.1) / 1.5
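As a minimal numerical sketch of this approximation (the moment value below is a round, assumed figure of the order reported for the 1964 Niigata earthquake, used only for illustration):

```python
import math

# Kanamori's approximation: Mw = (log10(M0) - 9.1) / 1.5, with the
# seismic moment M0 expressed in newton-meters.

def mw_from_moment(m0_newton_meters):
    """Approximate moment magnitude from seismic moment (N·m)."""
    return (math.log10(m0_newton_meters) - 9.1) / 1.5

# An assumed moment of 3.2e20 N·m, for illustration:
print(round(mw_from_moment(3.2e20), 1))   # 7.6
```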
The formula above made it much easier to estimate the energy-based magnitude Mw, but it changed the fundamental nature of the scale into a moment magnitude scale. Caltech seismologist Thomas C. Hanks noted that Kanamori's Mw scale was very similar to a relationship between ML and M0 that was reported by Thatcher & Hanks (1973),

ML ≈ (2/3) log10 M0 − 10.7

(with M0 in dyne⋅cm).
Hanks & Kanamori (1979) combined their work to define a new magnitude scale based on estimates of seismic moment,

M = (2/3)(log10 M0 − 9.1),

where M0 is defined in newton meters (N·m).
Although the formal definition of moment magnitude is given by this paper and is designated by M, it has been common for many authors to refer to Mw as moment magnitude. In most of these cases, they are actually referring to moment magnitude M as defined above.
Moment magnitude is now the most common measure of earthquake size for medium to large earthquake magnitudes, but in practice, seismic moment, the seismological parameter it is based on, is not measured routinely for smaller quakes. For example, the United States Geological Survey does not use this scale for earthquakes with a magnitude of less than 3.5, which includes the great majority of quakes.
Current practice in official earthquake reports is to adopt moment magnitude as the preferred magnitude, i.e., Mw is the official magnitude reported whenever it can be computed. Because seismic moment (M0, the quantity needed to compute Mw) is not measured if the earthquake is too small, the reported magnitude for earthquakes smaller than M 4 is often Richter's ML.
Popular press reports most often deal with significant earthquakes larger than M ~ 4. For these events, the official magnitude is the moment magnitude Mw, not Richter's local magnitude ML.
Mw = (2/3) log10 M0 − 10.7,

where M0 is the seismic moment in dyne⋅cm (10−7 N⋅m). The constant values in the equation are chosen to achieve consistency with the magnitude values produced by earlier scales, such as the local magnitude and the surface-wave magnitude.
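A short sketch of this definition in code (the moment value used for the 1960 Chilean earthquake is an approximate, commonly quoted figure, used here only to check the formula's behavior):

```python
import math

# Moment magnitude from seismic moment in dyne·cm, per the
# definition Mw = (2/3) * log10(M0) - 10.7.

def mw(m0_dyne_cm):
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

# ~2e30 dyne·cm is an approximate literature value for the 1960 Chilean event:
print(round(mw(2.0e30), 1))   # 9.5
```

Note that 1 N·m equals 10^7 dyne·cm, so a moment given in N·m must be multiplied by 10^7 before applying this form of the equation.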
Seismic moment is not a direct measure of energy changes during an earthquake. The relations between seismic moment and the energies involved in an earthquake depend on parameters that have large uncertainties and that may vary between earthquakes. Potential energy is stored in the crust in the form of elastic energy due to built-up stress and gravitational energy. During an earthquake, a portion of this stored energy is transformed into energy dissipated in frictional weakening and inelastic deformation of rocks (such as the creation of cracks), heat, and radiated seismic energy Es.
The potential energy drop caused by an earthquake is related approximately to its seismic moment by

ΔW ≈ (σ̄/μ) M0,

where σ̄ is the average of the absolute shear stresses on the fault before and after the earthquake (e.g., equation 3 of Venkataraman & Kanamori 2004) and μ is the average of the shear moduli of the rocks that constitute the fault. Currently, there is no technology to measure absolute stresses at all depths of interest, nor a method to estimate them accurately, and σ̄ is thus poorly known. It could vary highly from one earthquake to another. Two earthquakes with identical M0 but different σ̄ would have released different ΔW.
The radiated energy caused by an earthquake is approximately related to seismic moment by

Es ≈ (ηR Δσs / 2μ) M0,

where ηR is the radiated efficiency and Δσs is the static stress drop, i.e., the difference between shear stresses on the fault before and after the earthquake (e.g., from equation 1 of Venkataraman & Kanamori 2004). These two quantities are far from being constants. For instance, ηR depends on rupture speed; it is close to 1 for regular earthquakes but much smaller for slower earthquakes such as tsunami earthquakes and slow earthquakes. Two earthquakes with identical M0 but different ηR or Δσs would have radiated different Es.
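As a sketch, the radiated-energy relation can be evaluated with typical assumed parameter values (the numbers below are round figures chosen for illustration, not measurements of any event):

```python
# Radiated energy Es ≈ (eta_R * delta_sigma_s / (2 * mu)) * M0,
# evaluated for assumed round-number parameters.

eta_R = 1.0            # radiated efficiency (close to 1 for regular earthquakes)
delta_sigma_s = 3.0e6  # static stress drop, in Pa (~3 MPa is a common assumption)
mu = 3.0e10            # average shear modulus of crustal rock, in Pa
M0 = 6.0e18            # seismic moment, in N·m

Es = (eta_R * delta_sigma_s / (2 * mu)) * M0
print(f"Es ≈ {Es:.1e} J")   # Es ≈ 3.0e+14 J
```

With these assumptions the radiated energy is a factor Δσs/(2μ) = 5 × 10^-5 of the seismic moment, illustrating how little of the moment figure appears as radiated seismic energy.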
Because Es and M0 are fundamentally independent properties of an earthquake source, and since Es can now be computed more directly and robustly than in the 1970s, introducing a separate magnitude associated with radiated energy was warranted. Choy and Boatwright defined in 1995 the energy magnitude

Me = (2/3) log10 Es − 2.9,

where Es is in J (N·m).
Assuming the values of σ̄/μ are the same for all earthquakes, one can consider Mw as a measure of the potential energy change ΔW caused by earthquakes. Similarly, if one assumes the values of ηR Δσs/(2μ) are the same for all earthquakes, one can consider Mw as a measure of the energy Es radiated by earthquakes.
Under these assumptions, the following formula, obtained by solving for M0 the equation defining Mw, allows one to assess the ratio of energy release (potential or radiated) between two earthquakes of different moment magnitudes, m1 and m2:

E1/E2 ≈ 10^(3/2 (m1 − m2))
As with the Richter scale, an increase of one step on the logarithmic scale of moment magnitude corresponds to a 10^1.5 ≈ 32 times increase in the amount of energy released, and an increase of two steps corresponds to a 10^3 = 1000 times increase in energy. Thus, an earthquake of Mw 7.0 releases about 1,000 times as much energy as one of Mw 5.0, and about 32 times the energy of one of Mw 6.0.
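The 32-fold and 1000-fold figures follow directly from the logarithmic scaling; a minimal check:

```python
# Ratio of energy release between two moment magnitudes m1 and m2,
# following E1/E2 = 10 ** (1.5 * (m1 - m2)).

def energy_ratio(m1, m2):
    return 10 ** (1.5 * (m1 - m2))

print(round(energy_ratio(7.0, 6.0)))   # 32   (one magnitude step)
print(round(energy_ratio(7.0, 5.0)))   # 1000 (two magnitude steps)
```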
Various ways of determining moment magnitude have been developed, and several subtypes of the Mw scale can be used to indicate the basis used.
Richter's original scale has been tweaked through the decades, and nowadays calling it the "Richter scale" is an anachronism. The most common measure is known simply as the moment magnitude scale.