Thermodynamics
[Figure: Annotated color version of the original 1824 Carnot heat engine, showing the hot body (boiler), the working body (system, steam), and the cold body (water), with the letters labeled according to the stopping points in the Carnot cycle.]
Thermodynamics is a branch of
natural science concerned with
heat and
temperature and their relation to
energy and
work. It defines
macroscopic variables, such as
internal energy,
entropy, and
pressure,
that partly describe a body of matter or radiation. It states that the
behavior of those variables is subject to general constraints that are
common to all materials, not the peculiar properties of particular
materials. These general constraints are expressed in the four laws of
thermodynamics. Thermodynamics describes the bulk behavior of the body,
not the microscopic behaviors of the very large numbers of its
microscopic constituents, such as molecules. Its laws are explained by
statistical mechanics, in terms of the microscopic constituents.
Thermodynamics applies to a wide variety of topics in
science and
engineering.
Historically, thermodynamics developed out of a desire to increase the
efficiency and power output of early
steam engines, particularly through the work of French physicist
Nicolas Léonard Sadi Carnot (1824), who believed that the efficiency of heat engines was the key that could help France win the
Napoleonic Wars.
[1] Irish-born British physicist
Lord Kelvin was the first to formulate a concise definition of thermodynamics in 1854:
[2]
"Thermo-dynamics is the subject of the relation of heat to forces
acting between contiguous parts of bodies, and the relation of heat to
electrical agency."
Initially, thermodynamics, as applied to heat engines, was concerned
with the thermal properties of their 'working materials' such as steam,
in an effort to increase the efficiency and power output of engines.
Thermodynamics later expanded to the study of energy transfers in
chemical processes, for example to the investigation, published in 1840,
of the heats of chemical reactions
[3] by
Germain Hess,
which was not originally explicitly concerned with the relation between
energy exchanges by heat and work. From this evolved the study of
chemical thermodynamics and the role of
entropy in
chemical reactions.
[4][5][6][7][8][9][10][11][12]
Introduction
The plain term 'thermodynamics' refers to a macroscopic description of bodies and processes.
[13] "Any reference to atomic constitution is foreign to classical thermodynamics."
[14]
The qualified term 'statistical thermodynamics' refers to descriptions
of bodies and processes in terms of the atomic constitution of matter,
mainly described by sets of items all alike, so as to have equal
probabilities.
Thermodynamics arose from the study of two distinct kinds of transfer of energy, as
heat and as
work, and the relation of those to the system's macroscopic variables of volume, pressure and temperature.
[15][16]
Thermodynamic equilibrium is one of the most important concepts for thermodynamics.
[17]
The temperature of a thermodynamic system is well defined, and is
perhaps the most characteristic quantity of thermodynamics. As the
systems and processes of interest are taken further from thermodynamic
equilibrium, their exact thermodynamical study becomes more difficult.
Relatively simple approximate calculations, however, using the variables
of equilibrium thermodynamics, are of much practical value. In many
important practical cases, as in heat engines or refrigerators, the
systems consist of many subsystems at different temperatures and
pressures. In practice, thermodynamic calculations deal effectively with
these complicated dynamic systems provided the equilibrium
thermodynamic variables are well enough defined.
Central to thermodynamic analysis are the definitions of the
system, which is of interest, and of its
surroundings.
[8][18]
The surroundings of a thermodynamic system consist of physical devices
and of other thermodynamic systems that can interact with it. An example
of a thermodynamic surrounding is a heat bath, which is held at a
prescribed temperature, regardless of how much heat might be drawn from
it.
There are three fundamental kinds of physical entities in thermodynamics:
states of a system,
thermodynamic processes of a system, and
thermodynamic operations.
This allows two fundamental approaches to thermodynamic reasoning, that
in terms of states of a system, and that in terms of cyclic processes
of a system.
A thermodynamic system can be defined in terms of its states. In this way, a thermodynamic system is a
macroscopic
physical object, explicitly specified in terms of macroscopic physical
and chemical variables that describe its macroscopic properties. The
macroscopic state variables of thermodynamics have been recognized in the course of empirical work in physics and chemistry.
[9]
A thermodynamic operation is an artificial physical manipulation that
changes the definition of a system or its surroundings. Usually it is a
change of the permeability or some other feature of a wall of the
system,
[19]
which allows energy (as heat or work) or matter (mass) to be exchanged
with the environment. For example, the partition between two
thermodynamic systems can be removed so as to produce a single system. A
thermodynamic operation usually leads to a thermodynamic process of
transfer of mass or energy that changes the state of the system, and the
transfer occurs in natural accord with the laws of thermodynamics.
Thermodynamic operations are not the only initiators of thermodynamic
processes. Changes in the intensive or extensive variables of the
surroundings can, of course, also initiate thermodynamic processes.
A thermodynamic system can also be defined in terms of the
cyclic processes that it can undergo.
[20]
A cyclic process is a cyclic sequence of thermodynamic operations and
processes that can be repeated indefinitely often without changing the
final state of the system.
For thermodynamics and statistical thermodynamics to apply to a
system subjected to a process, it is necessary that the atomic
mechanisms of the process fall into one of two classes:
- those so rapid that, in the time frame of the process of interest,
the atomic states effectively visit all of their accessible range,
bringing the system to its state of internal thermodynamic equilibrium;
and
- those so slow that their progress can be neglected in the time frame of the process of interest.[21][22]
The rapid atomic mechanisms represent the internal energy of the
system. They mediate the macroscopic changes that are of interest for
thermodynamics and statistical thermodynamics, because they quickly
bring the system near enough to thermodynamic equilibrium. "When
intermediate rates are present, thermodynamics and statistical mechanics
cannot be applied."
[21]
Such intermediate rate atomic processes do not bring the system near
enough to thermodynamic equilibrium in the time frame of the macroscopic
process of interest. This separation of time scales of atomic processes
is a theme that recurs throughout the subject.
For example, classical thermodynamics is characterized by its study of materials that have
equations of state or characteristic equations.
They express equilibrium relations between macroscopic mechanical
variables and temperature and internal energy. They express the
constitutive peculiarities of the material of the system. A classical
material can usually be described by a function that makes pressure
dependent on volume and temperature, the resulting pressure being
established much more rapidly than any imposed change of volume or
temperature.
[23][24][25][26]
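For example, the ideal gas law (quoted later in this article as PV = nRT) is the simplest such characteristic equation; it makes the pressure a function of the volume and the temperature,

    p(V, T) = nRT / V,

where n is the amount of gas and R is the gas constant. This is quoted as a standard illustration, not as a statement about any particular material.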
The present article takes a gradual approach to the subject, starting
with a focus on cyclic processes and thermodynamic equilibrium, and
then gradually moving on to consider non-equilibrium systems.
Thermodynamic facts can often be explained by viewing macroscopic objects as assemblies of very many microscopic or
atomic objects that obey
Hamiltonian dynamics.
[8][27][28]
The microscopic or atomic objects exist in species, the objects of each
species being all alike. Because of this likeness, statistical methods
can be used to account for the macroscopic properties of the
thermodynamic system in terms of the properties of the microscopic
species. Such explanation is called
statistical thermodynamics; it is also often referred to by the term '
statistical mechanics',
though this term can have a wider meaning, referring to 'microscopic
objects', such as economic quantities, that do not obey Hamiltonian
dynamics.
[27]
History
The
history of thermodynamics as a scientific discipline generally begins with
Otto von Guericke who, in 1650, designed and built the world's first
vacuum pump and demonstrated a
vacuum using his
Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove
Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the physicist and chemist
Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with scientist
Robert Hooke, built an air pump.
[29] Using this pump, Boyle and Hooke noticed a correlation between
pressure,
temperature, and
volume. In time,
Boyle's Law was formulated, stating that for a gas at constant temperature, its pressure and volume are
inversely proportional. In 1679, based on these concepts, an associate of Boyle's named
Denis Papin built a
steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated.
Later designs implemented a steam release valve that kept the machine
from exploding. By watching the valve rhythmically move up and down,
Papin conceived of the idea of a piston and a cylinder engine. He did
not, however, follow through with his design. Nevertheless, in 1697,
based on Papin's designs, engineer
Thomas Savery built the first engine, followed by
Thomas Newcomen
in 1712. Although these early engines were crude and inefficient, they
attracted the attention of the leading scientists of the time.
The concepts of
heat capacity and
latent heat, which were necessary for development of thermodynamics, were developed by professor
Joseph Black at the University of Glasgow, where
James Watt
worked as an instrument maker. Watt consulted with Black on tests of
his steam engine, but it was Watt who conceived the idea of the
external condenser, greatly raising the
steam engine's efficiency.
[30] Drawing on all the previous work led
Sadi Carnot, the "father of thermodynamics", to publish
Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy and engine efficiency. The paper outlined the basic energetic relations between the
Carnot engine, the
Carnot cycle, and
motive power. It marked the start of thermodynamics as a modern science.
[11]
The first thermodynamic textbook was written in 1859 by
William Rankine, originally trained as a physicist and a civil and mechanical engineering professor at the
University of Glasgow.
[31] The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of
William Rankine,
Rudolf Clausius, and
William Thomson (Lord Kelvin).
The foundations of statistical thermodynamics were set out by physicists such as
James Clerk Maxwell,
Ludwig Boltzmann,
Max Planck,
Rudolf Clausius and
J. Willard Gibbs.
From 1873 to 1876, the American mathematical physicist
Josiah Willard Gibbs published a series of three papers, the most famous being "
On the equilibrium of heterogeneous substances".
[4] Gibbs showed how
thermodynamic processes, including
chemical reactions, could be graphically analyzed. By studying the
energy,
entropy,
volume,
chemical potential,
temperature and
pressure of the
thermodynamic system, one can determine if a process would occur spontaneously.
[32] Chemical thermodynamics was further developed by
Pierre Duhem,
[5] Gilbert N. Lewis,
Merle Randall,
[6] and
E. A. Guggenheim,
[7][8] who applied the mathematical methods of Gibbs.
[Figure: The lifetimes of some of the most important contributors to thermodynamics.]
Etymology
The etymology of
thermodynamics has an intricate history. It was first spelled in a hyphenated form as an adjective (
thermo-dynamic) and from 1854 to 1868 as the noun
thermo-dynamics to represent the science of generalized heat engines.
The components of the word
thermo-dynamic are derived from the
Greek words
θέρμη therme, meaning "heat," and
δύναμις dynamis, meaning "power" (Haynie claims that the word was coined around 1840).
[33][34]
Pierre Perrot claims that the term
thermodynamics was coined by
James Joule in 1858 to designate the science of relations between heat and power.
[11] Joule, however, never used that term, but used instead the term
perfect thermo-dynamic engine in reference to Thomson’s 1849
[35] phraseology.
By 1858,
thermo-dynamics, as a functional term, was used in
William Thomson's paper
An Account of Carnot's Theory of the Motive Power of Heat.[35]
Branches of description
Thermodynamic systems are theoretical constructions used to model
physical systems that exchange matter and energy in terms of the
laws of thermodynamics.
The study of thermodynamical systems has developed into several related
branches, each using a different fundamental model as a theoretical or
experimental basis, or applying the principles to varying types of
systems.
Classical thermodynamics
Classical thermodynamics accounts for the behavior of a thermodynamic system in terms either of its
time-invariant
equilibrium states or of its continually repeated cyclic
processes, but, formally, not both in the same account. It uses only
time-invariant, or equilibrium, macroscopic quantities measurable in
the laboratory, counting as time-invariant a long-term time-average of a
quantity, such as a flow, generated by a continually repetitive
process.
[36][37]
In classical thermodynamics, rates of change are not admitted as
variables of interest. An equilibrium state stands endlessly without
change over time, while a continually repeated cyclic process runs
endlessly without a net change in the system over time.
In the account in terms of equilibrium states of a system, a state of
thermodynamic equilibrium in a simple system is spatially homogeneous.
In the classical account solely in terms of a cyclic process, the
spatial interior of the 'working body' of that process is not
considered; the 'working body' thus does not have a defined internal
thermodynamic state of its own because no assumption is made that it
should be in thermodynamic equilibrium; only its inputs and outputs of
energy as heat and work are considered.
[38]
It is common to describe a cycle theoretically as composed of a
sequence of very many thermodynamic operations and processes. This
creates a link to the description in terms of equilibrium states. The
cycle is then theoretically described as a continuous progression of
equilibrium states.
Classical thermodynamics was originally concerned with the
transformation of energy in a cyclic process, and the exchange of energy
between closed systems defined only by their equilibrium states. The
distinction between transfers of energy as heat and as work was central.
As classical thermodynamics developed, the distinction between heat
and work became less central. This was because there was more interest
in open systems, for which the distinction between heat and work is not
simple, and is beyond the scope of the present article. Alongside the
amount of heat transferred as a fundamental quantity, entropy was
gradually found to be a more generally applicable concept, especially
when considering chemical reactions.
Massieu
in 1869 considered entropy as the basic dependent thermodynamic
variable, with energy potentials and the reciprocal of the thermodynamic
temperature as fundamental independent variables.
Massieu functions can be useful in present-day non-equilibrium thermodynamics. In 1875, in the work of
Josiah Willard Gibbs, entropy was considered a fundamental independent variable, while internal energy was a dependent variable.
[39]
All actual physical processes are to some degree irreversible.
Classical thermodynamics can consider irreversible processes, but its
account in exact terms is restricted to variables that refer only to
initial and final states of thermodynamic equilibrium, or to rates of
input and output that do not change with time. For example, classical
thermodynamics can consider time-average rates of flows generated by
continually repeated irreversible cyclic processes. Also it can consider
irreversible changes between equilibrium states of systems consisting
of several phases (as defined below in this article), or with removable
or replaceable partitions. But for systems that are described in terms
of equilibrium states, it considers neither flows, nor spatial
inhomogeneities in simple systems with no externally imposed force
fields such as gravity. In the account in terms of equilibrium states of
a system, descriptions of irreversible processes refer only to initial
and final static equilibrium states; the time it takes to change
thermodynamic state is not considered.
[40][41]
Local equilibrium thermodynamics
Local equilibrium thermodynamics is concerned with the time courses
and rates of progress of irreversible processes in systems that are
smoothly spatially inhomogeneous. It admits time as a fundamental
quantity, but only in a restricted way. Rather than considering
time-invariant flows as long-term-average rates of cyclic processes,
local equilibrium thermodynamics considers time-varying flows in systems
that are described by states of
local thermodynamic equilibrium, as follows.
For processes that involve only suitably small and smooth spatial
inhomogeneities and suitably small changes with time, a good
approximation can be found through the assumption of local thermodynamic
equilibrium. Within the large or global region of a process, for a
suitably small local region, this approximation assumes that a quantity
known as the entropy of the small local region can be defined in a
particular way. That particular way of definition of entropy is largely
beyond the scope of the present article, but here it may be said that it
is entirely derived from the concepts of classical thermodynamics; in
particular, neither flow rates nor changes over time are admitted into
the definition of the entropy of the small local region. It is assumed
without proof that the instantaneous global entropy of a non-equilibrium
system can be found by adding up the simultaneous instantaneous
entropies of its constituent small local regions. Local equilibrium
thermodynamics considers processes that involve the time-dependent
production of entropy by dissipative processes, in which kinetic energy
of bulk flow and chemical potential energy are converted into internal
energy at time-rates that are explicitly accounted for. Time-varying
bulk flows and specific diffusional flows are considered, but they are
required to be dependent variables, derived only from material
properties described only by static macroscopic equilibrium states of
small local regions. The independent state variables of a small local
region are only those of classical thermodynamics.
Generalized or extended thermodynamics
Like local equilibrium thermodynamics, generalized or extended
thermodynamics also is concerned with the time courses and rates of
progress of irreversible processes in systems that are smoothly
spatially inhomogeneous. It describes time-varying flows in terms of
states of suitably small local regions within a global region that is
smoothly spatially inhomogeneous, rather than considering flows as
time-invariant long-term-average rates of cyclic processes. In its
accounts of processes, generalized or extended thermodynamics admits
time as a fundamental quantity in a more far-reaching way than does
local equilibrium thermodynamics. The states of small local regions are
defined by macroscopic quantities that are explicitly allowed to vary
with time, including time-varying flows. Generalized thermodynamics
might tackle such problems as ultrasound or shock waves, in which there
are strong spatial inhomogeneities and changes in time fast enough to
outpace a tendency towards local thermodynamic equilibrium. Generalized
or extended thermodynamics is a diverse and developing project, rather
than a more or less completed subject such as is classical
thermodynamics.
[42][43]
For generalized or extended thermodynamics, the definition of the
quantity known as the entropy of a small local region is in terms beyond
those of classical thermodynamics; in particular, flow rates are
admitted into the definition of the entropy of a small local region. The
independent state variables of a small local region include flow rates,
which are not admitted as independent variables for the small local
regions of local equilibrium thermodynamics.
Outside the range of classical thermodynamics, the definition of the
entropy of a small local region is no simple matter. For a thermodynamic
account of a process in terms of the entropies of small local regions,
the definition of entropy should be such as to ensure that the second
law of thermodynamics applies in each small local region. It is often
assumed without proof that the instantaneous global entropy of a
non-equilibrium system can be found by adding up the simultaneous
instantaneous entropies of its constituent small local regions. For a
given physical process, the selection of suitable independent local
non-equilibrium macroscopic state variables for the construction of a
thermodynamic description calls for qualitative physical understanding,
rather than being a simply mathematical problem concerned with a
uniquely determined thermodynamic description. A suitable definition of
the entropy of a small local region depends on the physically insightful
and judicious selection of the independent local non-equilibrium
macroscopic state variables, and different selections provide different
generalized or extended thermodynamical accounts of one and the same
given physical process. This is one of the several good reasons for
considering entropy as an epistemic physical variable, rather than as a
simply material quantity. According to a respected author: "There is no
compelling reason to believe that the classical thermodynamic entropy is
a measurable property of nonequilibrium phenomena, ..."
[44]
Statistical thermodynamics
Statistical thermodynamics,
also called statistical mechanics, emerged with the development of
atomic and molecular theories in the second half of the 19th century and
early 20th century. It provides an explanation of classical
thermodynamics. It considers the microscopic interactions between
individual particles and their collective motions, in terms of classical
or of quantum mechanics. Its explanation is in terms of statistics that
rest on the fact that the system is composed of several species of particles
or collective motions, the members of each species respectively being
in some sense all alike.
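The bridge between the two descriptions can be made concrete by Boltzmann's entropy formula, a standard result of statistical mechanics quoted here for illustration:

    S = k_B ln W,

where S is the entropy of a macroscopic state, W is the number of microscopic configurations (microstates) compatible with that state, and k_B is the Boltzmann constant.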
Thermodynamic equilibrium
Equilibrium thermodynamics
studies transformations of matter and energy in systems at or near
thermodynamic equilibrium. In thermodynamic equilibrium, a system's
properties are, by definition, unchanging in time. In thermodynamic
equilibrium no macroscopic change is occurring or can be triggered;
within the system, every microscopic process is balanced by its
opposite; this is called the principle of detailed balance. A central
aim in equilibrium thermodynamics is: given a system in a well-defined
initial state, subject to specified constraints, to calculate what the
equilibrium state of the system is.
[45]
In theoretical studies, it is often convenient to consider the
simplest kind of thermodynamic system. This is defined variously by
different authors.
[40][46][47][48][49][50]
For the present article, the following definition is convenient, as
abstracted from the definitions of various authors. A region of material
with all intensive properties continuous in space and time is called a
phase. A simple system is for the present article defined as one that
consists of a single phase of a pure chemical substance, with no
interior partitions.
Within a simple isolated thermodynamic system in thermodynamic
equilibrium, in the absence of externally imposed force fields, all
properties of the material of the system are spatially homogeneous.
[51] Much of the basic theory of thermodynamics is concerned with homogeneous systems in thermodynamic equilibrium.
[4][52]
Most systems found in nature or considered in engineering are not in
thermodynamic equilibrium, exactly considered. They are changing or can
be triggered to change over time, and are continuously and
discontinuously subject to flux of matter and energy to and from other
systems.
[53]
For example, according to Callen, "in absolute thermodynamic
equilibrium all radioactive materials would have decayed completely and
nuclear reactions would have transmuted all nuclei to the most stable
isotopes. Such processes, which would take cosmic times to complete,
generally can be ignored."
[53]
Such processes being ignored, many systems in nature are close enough
to thermodynamic equilibrium that for many purposes their behaviour can
be well approximated by equilibrium calculations.
Quasi-static transfers between simple systems are nearly in thermodynamic equilibrium and are reversible
It very much eases and simplifies theoretical thermodynamical studies
to imagine transfers of energy and matter between two simple systems
that proceed so slowly that at all times each simple system considered
separately is near enough to thermodynamic equilibrium. Such processes
are sometimes called quasi-static and are near enough to being
reversible.
[54][55]
Natural processes are partly described by tendency towards thermodynamic equilibrium and are irreversible
If not initially in thermodynamic equilibrium, simple isolated
thermodynamic systems, as time passes, tend to evolve naturally towards
thermodynamic equilibrium. In the absence of externally imposed force
fields, they become homogeneous in all their local properties. Such
homogeneity is an important characteristic of a system in thermodynamic
equilibrium in the absence of externally imposed force fields.
Many thermodynamic processes can be modeled by compound or composite
systems, consisting of several or many contiguous component simple
systems, initially not in thermodynamic equilibrium, but allowed to
transfer mass and energy between them. Natural thermodynamic processes
are described in terms of a tendency towards thermodynamic equilibrium
within simple systems and in transfers between contiguous simple
systems. Such natural processes are irreversible.
[56]
Non-equilibrium thermodynamics
Non-equilibrium thermodynamics[57] is a branch of thermodynamics that deals with systems that are not in
thermodynamic equilibrium;
it is also called thermodynamics of irreversible processes.
Non-equilibrium thermodynamics is concerned with transport processes and
with the rates of chemical reactions.
[58]
Non-equilibrium systems can be in stationary states that are not
homogeneous even when there is no externally imposed field of force; in
this case, the description of the internal state of the system requires a
field theory.
[59][60][61]
One of the methods of dealing with non-equilibrium systems is to
introduce so-called 'internal variables'. These are quantities that
express the local state of the system, besides the usual local
thermodynamic variables; in a sense such variables might be seen as
expressing the 'memory' of the materials.
Hysteresis
may sometimes be described in this way. In contrast to the usual
thermodynamic variables, 'internal variables' cannot be controlled by
external manipulations.
[62] This approach is usually unnecessary for gases and liquids, but may be useful for solids.
[63] Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods.
Laws of thermodynamics
Thermodynamics states a set of four laws that are valid for all
systems that fall within the constraints implied by each. In the various
theoretical descriptions of thermodynamics these laws may be expressed
in seemingly differing forms, but the most prominent formulations are
the following:
- Zeroth law of thermodynamics: If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other.
This statement implies that thermal equilibrium is an
equivalence relation on the set of
thermodynamic systems
under consideration. Systems are said to be in thermal equilibrium with
each other if spontaneous molecular thermal energy exchanges between
them do not lead to a net exchange of energy. This law is tacitly
assumed in every measurement of temperature. For two bodies known to be
at the same
temperature,
deciding if they are in thermal equilibrium when put into thermal
contact does not require actually bringing them into contact and
measuring any changes of their observable properties in time.
[64]
In traditional statements, the law provides an empirical definition of
temperature and justification for the construction of practical
thermometers. In contrast to absolute thermodynamic temperatures,
empirical temperatures are measured just by the mechanical properties of
bodies, such as their volumes, without reliance on the concepts of
energy, entropy or the first, second, or third laws of thermodynamics.
[48][65] Empirical temperatures lead to
calorimetry for heat transfer in terms of the mechanical properties of bodies, without reliance on mechanical concepts of energy.
The physical content of the zeroth law has long been recognized. For example,
Rankine
in 1853 defined temperature as follows: "Two portions of matter are
said to have equal temperatures when neither tends to communicate heat
to the other."
[66] Maxwell in 1872 stated a "Law of Equal Temperatures".
[67] He also stated: "All Heat is of the same kind."
[68] Planck explicitly assumed and stated it in its customary present-day wording in his formulation of the first two laws.
[69]
By the time the desire arose to number it as a law, the other three had
already been assigned numbers, and so it was designated the
zeroth law.
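Because the zeroth law makes thermal equilibrium an equivalence relation, pairwise observations of equilibrium partition systems into classes that share one empirical temperature. The following minimal Python sketch (all names are illustrative, not taken from any established library) makes that grouping explicit with a union-find structure:

    # Sketch: the zeroth law lets us group systems into equilibrium
    # classes from pairwise observations, without direct contact.
    class EquilibriumClasses:
        def __init__(self):
            self.parent = {}

        def find(self, system):
            # Follow parents to the representative of the class.
            self.parent.setdefault(system, system)
            while self.parent[system] != system:
                self.parent[system] = self.parent[self.parent[system]]
                system = self.parent[system]
            return system

        def observe_equilibrium(self, a, b):
            # Record the experimental fact that a and b are in thermal equilibrium.
            self.parent[self.find(a)] = self.find(b)

        def same_temperature(self, a, b):
            # By transitivity (the zeroth law), one shared class suffices.
            return self.find(a) == self.find(b)

    eq = EquilibriumClasses()
    eq.observe_equilibrium("body A", "thermometer")
    eq.observe_equilibrium("body B", "thermometer")
    print(eq.same_temperature("body A", "body B"))  # True, without direct contact

This mirrors the use of a thermometer as the 'third body' of the law.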
The first law of thermodynamics asserts the existence of a state
variable for a system, the internal energy, and tells how it changes in
thermodynamic processes. The law allows a given internal energy of a
system to be reached by any combination of heat and work. It is
important that internal energy is a variable of state of the system (see
Thermodynamic state) whereas heat and work are variables that describe processes or changes of the state of systems.
The first law observes that the internal energy of an isolated system obeys the principle of
conservation of energy, which states that energy can be transformed (changed from one form to another), but cannot be created or destroyed.
[81][82][83][84][85]
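In equation form, with the common sign convention in which Q is the heat supplied to the system and W is the work done by the system, the first law reads

    ΔU = Q − W.

For example, if 100 J of heat enters a gas while the gas does 40 J of work by expanding, its internal energy rises by ΔU = 100 J − 40 J = 60 J, whatever the detailed path of the process.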
The second law of thermodynamics is an expression of the universal
principle of dissipation of kinetic and potential energy observable in
nature. The second law is an observation of the fact that over time,
differences in temperature, pressure, and chemical potential tend to
even out in a physical system that is isolated from the outside world.
Entropy
is a measure of how much this process has progressed. The entropy of an
isolated system that is not in equilibrium tends to increase over time,
approaching a maximum value at equilibrium.
In classical thermodynamics, the second law is a basic postulate
applicable to any system involving heat energy transfer; in statistical
thermodynamics, the second law is a consequence of the assumed
randomness of molecular chaos. There are many versions of the second
law, but they all have the same effect, which is to explain the
phenomenon of
irreversibility in nature.
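A standard worked example: when a quantity of heat Q passes spontaneously from a hot reservoir at temperature T_H to a cold reservoir at T_C, the total entropy change is

    ΔS = Q/T_C − Q/T_H > 0, since T_H > T_C.

The reverse flow would require ΔS < 0 and is never observed in an isolated system; this is the irreversibility the second law expresses.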
The third law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching
absolute zero
of temperature. This law provides an absolute reference point for the
determination of entropy. The entropy determined relative to this point
is the absolute entropy. Alternate definitions are, "the entropy of all
systems and of all states of a system is smallest at absolute zero," or
equivalently "it is impossible to reach the absolute zero of temperature
by any finite number of processes".
Absolute zero is −273.15 °C (degrees Celsius), or −459.67 °F (degrees Fahrenheit) or 0 K (kelvin).
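These figures are related by the standard scale conversions T/K = T/°C + 273.15 and T/°F = (9/5) T/°C + 32; applying them to −273.15 °C indeed yields 0 K and −459.67 °F.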
System models
Types of transfers permitted
in a thermodynamic process
for a type of partition
type of partition |
type of transfer |
|
Mass
and energy |
Work |
Heat |
permeable to matter |
+ |
0 |
0 |
permeable to energy but
impermeable to matter |
0 |
+ |
+ |
adiabatic |
0 |
+ |
0 |
adynamic and
impermeable to matter |
0 |
0 |
+ |
isolating |
0 |
0 |
0 |
[Figure: A diagram of a generic thermodynamic system.]
An important concept in thermodynamics is the
thermodynamic system, a precisely defined region of the universe under study. Everything in the universe except the system is known as the
surroundings. A system is separated from the remainder of the universe by a
boundary, which may be actual, or merely notional and fictive, but by convention delimits a finite volume. Transfers of
work,
heat, or
matter
between the system and the surroundings take place across this
boundary. The boundary may or may not have properties that restrict what
can be transferred across it. A system may have several distinct
boundary sectors or partitions separating it from the surroundings, each
characterized by how it restricts transfers, and being permeable to its
characteristic transferred quantities.
The volume can be the region surrounding a single atom resonating energy, as
Max Planck defined in 1900;
[citation needed] it can be a body of steam or air in a
steam engine, such as
Sadi Carnot defined in 1824; it can be the body of a
tropical cyclone, such as
Kerry Emanuel theorized in 1986 in the field of
atmospheric thermodynamics; it could also be just one
nuclide (i.e. a system of
quarks) as hypothesized in
quantum thermodynamics.
Anything that passes across the boundary needs to be accounted for in
a proper transfer balance equation. Thermodynamics is largely about
such transfers.
Boundary sectors are of various characters: rigid, flexible, fixed,
moveable, actually restrictive, and fictive or not actually restrictive.
For example, in an engine, a fixed boundary sector means the piston is
locked at its position; then no pressure-volume work is done across it.
In that same engine, a moveable boundary allows the piston to move in
and out, permitting pressure-volume work. There is no restrictive
boundary sector for the whole earth including its atmosphere, and so
roughly speaking, no pressure-volume work is done on or by the whole
earth system. Such a system is sometimes said to be diabatically heated
or cooled by radiation.
[86][87]
Thermodynamics distinguishes classes of systems by their boundary sectors.
- An open system has a boundary sector that is permeable to
matter; such a sector is usually permeable also to energy, but the
energy that passes cannot in general be uniquely sorted into heat and
work components. Open system boundaries may be either actually
restrictive, or else non-restrictive.
- A closed system has no boundary sector that is permeable to
matter, but in general its boundary is permeable to energy. For closed
systems, boundaries are totally prohibitive of matter transfer.
- An adiabatically isolated system has only adiabatic boundary
sectors. Energy can be transferred as work, but transfers of matter and
of energy as heat are prohibited.
- A purely diathermically isolated system has only boundary
sectors permeable only to heat; it is sometimes said to be adynamically
isolated and closed to matter transfer. A process in which no work is
transferred is sometimes called adynamic.[88]
- An isolated system has only isolating boundary sectors. Nothing can be transferred into or out of it.
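The table and the classification above can be summarized in a short Python sketch (a simplified model with illustrative names; in particular, it ignores the caveat that energy crossing a matter-permeable boundary cannot in general be sorted into heat and work):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Boundary:
        matter: bool  # permeable to matter
        work: bool    # permits transfer of energy as work
        heat: bool    # permits transfer of energy as heat

    def classify(b: Boundary) -> str:
        # Mirrors the classes of systems distinguished above.
        if b.matter:
            return "open system boundary"
        if b.work and b.heat:
            return "closed system boundary"
        if b.work:
            return "adiabatic boundary"
        if b.heat:
            return "purely diathermal (adynamic) boundary"
        return "isolating boundary"

    print(classify(Boundary(matter=False, work=True, heat=False)))  # adiabatic boundary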
Engineering and natural processes are often described as composites
of many different component simple systems, sometimes with unchanging or
changing partitions between them. A change of partition is an example
of a
thermodynamic operation.
States and processes
There are three fundamental kinds of entity in thermodynamics: states
of a system, processes of a system, and thermodynamic operations. This
allows three fundamental approaches to thermodynamic reasoning, that in
terms of states of thermodynamic equilibrium of a system, and that in
terms of time-invariant processes of a system, and that in terms of
cyclic processes of a system.
The approach through states of thermodynamic equilibrium of a system
requires a full account of the state of the system as well as a notion
of process from one state to another of a system, but may require only
an idealized or partial account of the state of the surroundings of the
system or of other systems.
The method of description in terms of states of thermodynamic
equilibrium has limitations. For example, processes in a region of
turbulent flow, or in a burning gas mixture, or in a
Knudsen gas may be beyond "the province of thermodynamics".
[89][90][91]
This problem can sometimes be circumvented through the method of
description in terms of cyclic or of time-invariant flow processes. This
is part of the reason why the founders of thermodynamics often
preferred the cyclic process description.
Approaches through processes of time-invariant flow of a system are used for some studies. Some processes, for example
Joule-Thomson expansion,
are studied through steady-flow experiments, but can be accounted for
by distinguishing the steady bulk flow kinetic energy from the internal
energy, and thus can be regarded as within the scope of classical
thermodynamics defined in terms of equilibrium states or of cyclic
processes.
[36][92] Other flow processes, for example
thermoelectric effects,
are essentially defined by the presence of differential flows or
diffusion so that they cannot be adequately accounted for in terms of
equilibrium states or classical cyclic processes.
[93][94]
The notion of a cyclic process does not require a full account of the
state of the system, but does require a full account of how the process
occasions transfers of matter and energy between the principal system
(which is often called the
working body) and its surroundings,
which must include at least two heat reservoirs at different known and
fixed temperatures, one hotter than the principal system and the other
colder than it, as well as a reservoir that can receive energy from the
system as work and can do work on the system. The reservoirs can
alternatively be regarded as auxiliary idealized component systems,
alongside the principal system. Thus an account in terms of cyclic
processes requires at least four contributory component systems. The
independent variables of this account are the amounts of energy that
enter and leave the idealized auxiliary systems. In this kind of
account, the working body is often regarded as a "black box",
[95]
and its own state is not specified. In this approach, the notion of a
properly numerical scale of empirical temperature is a presupposition of
thermodynamics, not a notion constructed by or derived from it.
Account in terms of states of thermodynamic equilibrium
When a system is at thermodynamic equilibrium under a given set of
conditions of its surroundings, it is said to be in a definite
thermodynamic state, which is fully described by its state variables.
If a system is simple as defined above, and is in thermodynamic
equilibrium, and is not subject to an externally imposed force field,
such as gravity, electricity, or magnetism, then it is homogeneous, that
is to say, spatially uniform in all respects.
[96]
In a sense, a homogeneous system can be regarded as spatially zero-dimensional, because it has no spatial variation.
If a system in thermodynamic equilibrium is homogeneous, then its
state can be described by a few physical variables, which are mostly
classifiable as
intensive variables and
extensive variables.
[8][27][61][97][98]
An intensive variable is one that is unchanged with the thermodynamic operation of
scaling of a system.
An extensive variable is one that simply scales with the scaling of the
system, without the further requirement, used just below, of
additivity even when the added systems are inhomogeneous.
Examples of extensive thermodynamic variables are total mass and
total volume. Under the above definition, entropy is also regarded as an
extensive variable. Examples of intensive thermodynamic variables are
temperature,
pressure, and chemical concentration; intensive thermodynamic variables
are defined at each spatial point and each instant of time in a system.
Physical macroscopic variables can be mechanical, material, or thermal.
[27] Temperature is a thermal variable; according to Guggenheim, "the most important conception in thermodynamics is temperature."
[8]
Intensive variables have the property that if any number of systems,
each in its own separate homogeneous thermodynamic equilibrium state,
all with the same respective values of all of their intensive variables,
regardless of the values of their extensive variables, are laid
contiguously with no partition between them, so as to form a new system,
then the values of the intensive variables of the new system are the
same as those of the separate constituent systems. Such a composite
system is in a homogeneous thermodynamic equilibrium. Examples of
intensive variables are temperature, chemical concentration, pressure,
density of mass, density of internal energy, and, when it can be
properly defined, density of entropy.
[99] In other words, intensive variables are not altered by the thermodynamic operation of scaling.
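The two definitions can be stated compactly. Under the thermodynamic operation of scaling a system by a factor λ, an intensive variable y and an extensive variable X behave as

    y → y and X → λX.

For a simple system this is expressed by the homogeneity of the internal energy as a function of its extensive arguments, U(λS, λV, λN) = λ U(S, V, N), a standard relation quoted here for illustration.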
For the account immediately below, an alternative
definition of extensive variables is considered, which requires that if
any number of systems, regardless of their possible separate
thermodynamic equilibrium or non-equilibrium states or intensive
variables, are laid side by side with no partition between them so as to
form a new system, then the values of the extensive variables of the
new system are the sums of the values of the respective extensive
variables of the individual separate constituent systems. Obviously,
there is no reason to expect such a composite system to be in a
homogeneous thermodynamic equilibrium. Examples of extensive variables
in this alternative definition are mass, volume, and internal energy.
They depend on the total quantity of mass in the system.
[100]
In other words, although extensive variables scale with the system
under the thermodynamic operation of scaling, nevertheless the present
alternative definition of an extensive variable requires more than this:
it requires also its additivity regardless of the inhomogeneity (or
equality or inequality of the values of the intensive variables) of the
component systems.
Though, when it can be properly defined, density of entropy is an
intensive variable, for inhomogeneous systems, entropy itself does not
fit into this alternative classification of state variables.
[101][102]
The reason is that entropy is a property of a system as a whole, and
not necessarily related simply to its constituents separately. It is
true that for any number of systems each in its own separate homogeneous
thermodynamic equilibrium, all with the same values of intensive
variables, removal of the partitions between the separate systems
results in a composite homogeneous system in thermodynamic equilibrium,
with all the values of its intensive variables the same as those of the
constituent systems, and it is reservedly or conditionally true that the
entropy of such a restrictively defined composite system is the sum of
the entropies of the constituent systems. But if the constituent systems
do not satisfy these restrictive conditions, the entropy of a composite
system cannot be expected to be the sum of the entropies of the
constituent systems, because the entropy is a property of the composite
system as a whole. Therefore, though under these restrictive
reservations, entropy satisfies some requirements for extensivity
defined just above, entropy in general does not fit the immediately
present definition of an extensive variable.
Being neither an intensive variable nor an extensive variable
according to the immediately present definition, entropy is thus a
stand-out variable, because it is a state variable of a system as a
whole.
[101]
A non-equilibrium system can have a very inhomogeneous dynamical
structure. This is one reason for distinguishing the study of
equilibrium thermodynamics from the study of non-equilibrium
thermodynamics.
The physical reason for the existence of extensive variables is the
time-invariance of volume in a given inertial reference frame, and the
strictly local conservation of mass, momentum, angular momentum, and
energy. As noted by Gibbs, entropy is unlike energy and mass, because it
is not locally conserved.
[101] The stand-out quantity entropy is never conserved in real physical processes; all real physical processes are irreversible.
[103] The motion of planets seems reversible on a short time scale (millions of years), but their motion, according to
Newton's laws, is mathematically an example of
deterministic chaos.
Eventually a planet suffers an unpredictable collision with an object
from its surroundings, outer space in this case, and consequently its
future course is radically unpredictable. Theoretically this can be
expressed by saying that every natural process dissipates some
information from the predictable part of its activity into the
unpredictable part. The predictable part is expressed in the generalized
mechanical variables, and the unpredictable part in heat.
Other state variables can be regarded as conditionally 'extensive'
subject to reservation as above, but not extensive as defined above.
Examples are the Gibbs free energy, the Helmholtz free energy, and the
enthalpy. Consequently, just because for some systems under particular
conditions of their surroundings such state variables are conditionally
conjugate to intensive variables, such conjugacy does not make such
state variables extensive as defined above. This is another reason for
distinguishing the study of equilibrium thermodynamics from the study of
non-equilibrium thermodynamics. In another way of thinking, this
explains why heat is to be regarded as a quantity that refers to a
process and not to a state of a system.
A system with no internal partitions, and in thermodynamic
equilibrium, can be inhomogeneous in the following respect: it can
consist of several so-called 'phases', each homogeneous in itself, in
immediate contiguity with other phases of the system, but
distinguishable by their having various respectively different physical
characters, with discontinuity of intensive variables at the boundaries
between the phases; a mixture of different chemical species is
considered homogeneous for this purpose if it is physically homogeneous.
[104]
For example, a vessel can contain a system consisting of water vapour
overlying liquid water; then there is a vapour phase and a liquid phase,
each homogeneous in itself, but still in thermodynamic equilibrium with
the other phase. For the immediately present account, systems with
multiple phases are not considered, though for many thermodynamic
questions, multiphase systems are important.
Equation of state
The macroscopic variables of a thermodynamic system in thermodynamic
equilibrium, in which temperature is well defined, can be related to one
another through
equations of state or characteristic equations.
[23][24][25][26] They express the
constitutive
peculiarities of the material of the system. The equation of state must
comply with some thermodynamic constraints, but cannot be derived from
the general principles of thermodynamics alone.
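Two standard examples, quoted for illustration: the ideal gas has the characteristic equation

    pV = nRT,

while the van der Waals equation,

    (p + a n²/V²)(V − nb) = nRT,

adds two material-specific constants a and b, which encode the constitutive peculiarities of a particular gas while still complying with the general thermodynamic constraints.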
Thermodynamic processes between states of thermodynamic equilibrium
A
thermodynamic process
is defined by changes of state internal to the system of interest,
combined with transfers of matter and energy to and from the
surroundings of the system or to and from other systems. A system is
demarcated from its surroundings or from other systems by partitions
that more or less separate them, and may move as a piston to change the
volume of the system and thus transfer work.
Dependent and independent variables for a process
A process is described by changes in values of state variables of
systems or by quantities of exchange of matter and energy between
systems and surroundings. The change must be specified in terms of
prescribed variables. The choice of which variables are to be used is
made in advance of consideration of the course of the process, and
cannot be changed. Certain of the variables chosen in advance are called
the independent variables.
[105]
From changes in independent variables may be derived changes in other
variables called dependent variables. For example, a process may occur at
constant pressure with pressure prescribed as an independent variable,
and temperature changed as another independent variable, and then
changes in volume are considered as dependent. Careful attention to this
principle is necessary in thermodynamics.
[106][107]
Changes of state of a system
In the approach through equilibrium states of the system, a process can be described in two main ways.
In one way, the system is considered to be connected to the
surroundings by some kind of more or less separating partition, and
allowed to reach equilibrium with the surroundings with that partition
in place. Then, while the separative character of the partition is kept
unchanged, the conditions of the surroundings are changed, and exert
their influence on the system again through the separating partition, or
the partition is moved so as to change the volume of the system; and a
new equilibrium is reached. For example, a system is allowed to reach
equilibrium with a heat bath at one temperature; then the temperature of
the heat bath is changed and the system is allowed to reach a new
equilibrium; if the partition allows conduction of heat, the new
equilibrium is different from the old equilibrium.
In the other way, several systems are connected to one another by
various kinds of more or less separating partitions, and to reach
equilibrium with each other, with those partitions in place. In this
way, one may speak of a 'compound system'. Then one or more partitions
is removed or changed in its separative properties or moved, and a new
equilibrium is reached. The Joule-Thomson experiment is an example of
this; a tube of gas is separated from another tube by a porous
partition; the volume available in each of the tubes is determined by
respective pistons; equilibrium is established with an initial set of
volumes; the volumes are changed and a new equilibrium is established.
[108][109][110][111][112] Another example is in separation and mixing of gases, with use of chemically semi-permeable membranes.
[113]
Commonly considered thermodynamic processes
It is often convenient to study a thermodynamic process in which a
single variable, such as temperature, pressure, or volume, etc., is held
fixed. Furthermore, it is useful to group these processes into pairs,
in which each variable held constant is one member of a
conjugate pair.
Several commonly studied thermodynamic processes are:
- Isobaric process: occurs at constant pressure
- Isochoric process: occurs at constant volume (also called isometric/isovolumetric)
- Isothermal process: occurs at a constant temperature
- Adiabatic process: occurs without loss or gain of energy as heat
- Isentropic process: a reversible adiabatic process, occurring at constant entropy; strictly, it is a fictional idealization. Conceptually it is possible
actually to conduct such a process physically, keeping the entropy of the
system constant by systematically controlled removal of heat, through
conduction to a cooler body, to compensate for entropy produced within
the system by irreversible work done on the system. Such isentropic
conduct of a process seems called for when the entropy of the system is
considered as an independent variable, as for example when the internal
energy is considered as a function of the entropy and volume of the
system, the natural variables of the internal energy as studied by Gibbs.
- Isenthalpic process: occurs at a constant enthalpy
- Isolated process: no matter or energy (neither as work nor as heat) is transferred into or out of the system
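For a simple gas, these constraints give simple standard expressions for the pressure-volume work W done by the system (quoted for illustration):

    isobaric: W = p (V2 − V1);
    isochoric: W = 0, since the volume does not change;
    isothermal (ideal gas): W = nRT ln(V2/V1);
    adiabatic: Q = 0, so ΔU = −W.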
It is sometimes of interest to study a process in which several
variables are controlled, subject to some specified constraint. In a
system in which a chemical reaction can occur, for example, in which the
pressure and temperature can affect the equilibrium composition, a
process might occur in which temperature is held constant but pressure
is slowly altered, just so that chemical equilibrium is maintained all
the way. There is a corresponding process at constant temperature in
which the final pressure is the same but is reached by a rapid jump.
Then it can be shown that the volume change resulting from the rapid
jump process is smaller than that from the slow equilibrium process.
[114] The work transferred differs between the two processes.
Account in terms of cyclic processes
A
cyclic process[20]
is a process that can be repeated indefinitely often without changing
the final state of the system in which the process occurs. The only
traces of the effects of a cyclic process are to be found in the
surroundings of the system or in other systems. This is the kind of
process that concerned early thermodynamicists such as
Carnot, and in terms of which
Kelvin defined absolute temperature,
[115][116] before the use of the quantity of entropy by
Rankine[117] and its clear identification by
Clausius.
[118]
For some systems, for example those with some plastic working substances,
cyclic processes are practically infeasible because the working
substance undergoes practically irreversible changes.
[60] This is why mechanical devices are lubricated with oil, and one of the reasons why electrical devices are often useful.
A cyclic process of a system requires in its surroundings at least
two heat reservoirs at different temperatures, one at a higher
temperature that supplies heat to the system, the other at a lower
temperature that accepts heat from the system. The early work on
thermodynamics tended to use the cyclic process approach, because it was
interested in machines that converted some of the heat from the
surroundings into mechanical power delivered to the surroundings,
without too much concern about the internal workings of the machine.
Such a machine, while receiving an amount of heat from a higher
temperature reservoir, always needs a lower temperature reservoir that
accepts some lesser amount of heat. The difference in amounts of heat is
equal to the amount of heat converted to work.
[83][119]
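In symbols, for such a machine: if Q_H is the heat received from the hotter reservoir and Q_C the lesser heat rejected to the colder one, the work delivered per cycle is

    W = Q_H − Q_C,

and the efficiency η = W/Q_H of any such cyclic process is bounded by the Carnot value

    η ≤ 1 − T_C/T_H,

with equality only in the reversible limit. These are standard relations quoted here for illustration.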
Later, the internal workings of a system became of interest, and they
are described by the states of the system. Nowadays, instead of arguing
in terms of cyclic processes, some writers are inclined to derive the
concept of absolute temperature from the concept of entropy, a variable
of state.
Instrumentation
There are two types of
thermodynamic instruments, the
meter and the
reservoir. A thermodynamic meter is any device that measures any parameter of a
thermodynamic system.
In some cases, the thermodynamic parameter is actually defined in terms
of an idealized measuring instrument. For example, the
zeroth law
states that if two bodies are in thermal equilibrium with a third body,
they are also in thermal equilibrium with each other. This principle,
as noted by
James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized
thermometer is a sample of an ideal gas at constant pressure. From the
ideal gas law PV=nRT,
the volume of such a sample can be used as an indicator of temperature;
in this manner it defines temperature. Although pressure is defined
mechanically, a pressure-measuring device, called a
barometer may also be constructed from a sample of an ideal gas held at a constant temperature. A
calorimeter is a device that measures and defines the internal energy of a system.
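A minimal Python sketch of the idealized constant-pressure gas thermometer just described (names and numbers are illustrative only):

    R = 8.314  # molar gas constant, J/(mol*K)

    def temperature_from_volume(volume_m3, pressure_pa, amount_mol):
        """Empirical temperature of an ideal-gas sample, from PV = nRT."""
        return pressure_pa * volume_m3 / (amount_mol * R)

    # One mole at 100 kPa occupying 22.711 litres indicates about 273.15 K.
    print(temperature_from_volume(0.022711, 100000.0, 1.0))

Here the measured volume serves as the indicator and, in this manner, defines the temperature.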
A thermodynamic reservoir is a system so large that it does not
appreciably alter its state parameters when brought into contact with
the test system. It is used to impose a particular value of a state
parameter upon the system. For example, a pressure reservoir is a system
at a particular pressure, which imposes that pressure upon any test
system that it is mechanically connected to. The Earth's atmosphere is
often used as a pressure reservoir.
Conjugate variables
A central concept of thermodynamics is that of
energy. By the
First Law,
the total energy of a system and its surroundings is conserved. Energy
may be transferred into a system by heating, compression, or addition of
matter, and extracted from a system by cooling, expansion, or
extraction of matter. In
mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement.
Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a "force" applied to some
thermodynamic system,
the second being akin to the resulting "displacement," and the product
of the two equalling the amount of energy transferred. The common
conjugate variables are:
- pressure–volume (the mechanical parameters);
- temperature–entropy (the thermal parameters);
- chemical potential–particle number (the material parameters).
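As a minimal numerical sketch (illustrative, not from the article), each conjugate pair contributes a "force × displacement" term to the energy transferred in a small change, dU = T dS − p dV + μ dN; the function and values below are assumptions for the example:

    # Hedged sketch: energy transferred in a small change, summed over
    # conjugate pairs (T,S), (-p,V), (mu,N). All values are illustrative.

    def energy_change(T, dS, p, dV, mu, dN):
        """dU = T*dS - p*dV + mu*dN for small (differential-like) changes."""
        return T * dS - p * dV + mu * dN

    # Example: heating adds T*dS, expansion removes p*dV of work.
    dU = energy_change(T=300.0, dS=0.01, p=101325.0, dV=1e-5, mu=0.0, dN=0.0)
    print(dU)  # 3.0 - 1.01325 ≈ 1.99 J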
Potentials
Thermodynamic potentials
are different quantitative measures of the stored energy in a system.
Potentials are used to measure energy changes in systems as they evolve
from an initial state to a final state. The potential used depends on
the constraints of the system, such as constant temperature or pressure.
For example, the Helmholtz and Gibbs energies are the energies
available in a system to do useful work when the temperature and volume
or the pressure and temperature are fixed, respectively.
The five most well known potentials are:
- Internal energy U
- Helmholtz free energy F = U − TS
- Enthalpy H = U + pV
- Gibbs free energy G = U + pV − TS
- Grand potential (Landau free energy) Ω = U − TS − Σᵢ μᵢNᵢ
where T is the temperature, S the entropy, p the pressure, V the
volume, μ the chemical potential, N the number of particles in the system, and
i is the count of particle types in the system.
Thermodynamic potentials can be derived from the energy balance
equation applied to a thermodynamic system. Other thermodynamic
potentials can also be obtained through
Legendre transformation.
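A small sketch (illustrative, not from the article) of how the other potentials follow from the internal energy by the Legendre transforms listed above; all state-variable values are assumed for the example:

    # Hedged sketch: the common Legendre transforms of the internal
    # energy U. Inputs are assumed known state variables; values illustrative.

    def helmholtz(U, T, S):      # F = U - TS
        return U - T * S

    def enthalpy(U, p, V):       # H = U + pV
        return U + p * V

    def gibbs(U, T, S, p, V):    # G = U + pV - TS
        return U + p * V - T * S

    def grand_potential(U, T, S, mu_N_pairs):  # Omega = U - TS - sum(mu_i * N_i)
        return U - T * S - sum(mu * N for mu, N in mu_N_pairs)

    U, T, S, p, V = 1000.0, 300.0, 2.0, 101325.0, 0.001
    print(helmholtz(U, T, S), enthalpy(U, p, V), gibbs(U, T, S, p, V))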
Axiomatics
Most accounts of thermodynamics presuppose the law of
conservation of mass, sometimes with,
[120] and sometimes without,
[121][122][123] explicit mention. Particular attention is paid to the law in accounts of non-equilibrium thermodynamics.
[124][125] One statement of this law is "The total mass of a closed system remains constant."
[9] Another statement of it is "In a chemical reaction, matter is neither created nor destroyed."
[126]
Implied in this is that matter and energy are not considered to be
interconverted in such accounts. The full generality of the law of
conservation of energy is thus not used in such accounts.
In 1909,
Constantin Carathéodory presented
[48] a purely mathematical axiomatic formulation, a description often referred to as
geometrical thermodynamics, and sometimes said to take the "mechanical approach"
[78] to thermodynamics. The Carathéodory formulation is restricted to equilibrium thermodynamics and does not attempt to deal with
non-equilibrium thermodynamics, forces that act at a distance on the system, or surface tension effects.
[127]
Moreover, Carathéodory's formulation does not deal with materials like
water near 4 °C, which have a density extremum as a function of
temperature at constant pressure.
[128][129] Carathéodory used the
law of conservation of energy
as an axiom from which, along with the contents of the zeroth law, and
some other assumptions including his own version of the second law, he
derived the first law of thermodynamics.
[130] Consequently, one might also describe Carathéodory's work as lying in the field of
energetics,
[131] which is broader than thermodynamics. Carathéodory presupposed the law of conservation of mass without explicit mention of it.
Since the time of Carathéodory, other influential axiomatic
formulations of thermodynamics have appeared, which like Carathéodory's,
use their own respective axioms, different from the usual statements of
the four laws, to derive the four usually stated laws.
[132][133][134]
Many axiomatic developments assume the existence of states of
thermodynamic equilibrium and of states of thermal equilibrium. States
of thermodynamic equilibrium of compound systems allow their component
simple systems to exchange heat and matter and to do work on each other
on their way to overall joint equilibrium. Thermal equilibrium allows
them only to exchange heat. The physical properties of glass depend on
its history of being heated and cooled and, strictly speaking, glass is
not in thermodynamic equilibrium.
[63]
According to
Herbert Callen's
widely cited 1985 text on thermodynamics: "An essential prerequisite
for the measurability of energy is the existence of walls that do not
permit transfer of energy in the form of heat."
[135] According to
Werner Heisenberg's mature and careful examination of the basic concepts of physics, the theory of heat has a self-standing place.
[136]
From the viewpoint of the axiomatist, there are several different
ways of thinking about heat, temperature, and the second law of
thermodynamics. The Clausius way rests on the empirical fact that heat
is conducted always down, never up, a temperature gradient. The Kelvin
way is to assert the empirical fact that conversion of heat into work by
cyclic processes is never perfectly efficient. A more mathematical way
is to assert the existence of a function of state called the entropy
that tells whether a hypothesized process occurs spontaneously in
nature. A more abstract way is that of Carathéodory that in effect
asserts the irreversibility of some adiabatic processes. For these
different ways, there are respective corresponding different ways of
viewing heat and temperature.
The Clausius–Kelvin–Planck way. This way prefers ideas close to
the empirical origins of thermodynamics. It presupposes transfer of
energy as heat, and empirical temperature as a scalar function of state.
According to Gislason and Craig (2005): "Most thermodynamic data come
from calorimetry..."
[137] According to Kondepudi (2008): "Calorimetry is widely used in present day laboratories."
[138]
In this approach, what is often currently called the zeroth law of
thermodynamics is deduced as a simple consequence of the presupposition
of the nature of heat and empirical temperature, but it is not named as a
numbered law of thermodynamics. Planck attributed this point of view to
Clausius, Kelvin, and Maxwell. Planck wrote (on page 90 of the seventh
edition, dated 1922, of his treatise) that he thought that no proof of
the second law of thermodynamics could ever work that was not based on
the impossibility of a perpetual motion machine of the second kind. In
that treatise, Planck makes no mention of the 1909 Carathéodory way,
which was well known by 1922. Planck for himself chose a version of what
is just above called the Kelvin way.
[139]
The development by Truesdell and Bharatha (1977) is so constructed that
it can deal naturally with cases like that of water near 4 °C.
[133]
The way that assumes the existence of entropy as a function of state.
This way also presupposes transfer of energy as heat, and it
presupposes the usually stated form of the zeroth law of thermodynamics,
and from these two it deduces the existence of empirical temperature.
Then from the existence of entropy it deduces the existence of absolute
thermodynamic temperature.
[8][132]
The Carathéodory way. This way presupposes that the state of a
simple one-phase system is fully specifiable by just one more state
variable than the known exhaustive list of mechanical variables of
state. It does not explicitly name empirical temperature, but speaks of
the one-dimensional "non-deformation coordinate". This satisfies the
definition of an empirical temperature, that lies on a one-dimensional
manifold. The Carathéodory way needs to assume moreover that the
one-dimensional manifold has a definite sense, which determines the
direction of irreversible adiabatic process, which is effectively
assuming that heat is conducted from hot to cold. This way presupposes
the often currently stated version of the zeroth law, but does not
actually name it as one of its axioms.
[127]
According to one author, Carathéodory's principle, which is his version
of the second law of thermodynamics, does not imply the increase of
entropy when work is done under adiabatic conditions (as was noted by
Planck
[140]).
Thus Carathéodory's way leaves unstated a further empirical fact that
is needed for a full expression of the second law of thermodynamics.
[141]
Scope of thermodynamics
Originally thermodynamics concerned material and radiative phenomena
that are experimentally reproducible. For example, a state of
thermodynamic equilibrium is a steady state reached after a system has
aged so that it no longer changes with the passage of time. But more
than that, for thermodynamics, a system defined by its being prepared
in a certain way must, on every particular occasion of
preparation, upon aging, reach one and the same eventual state of
thermodynamic equilibrium, entirely determined by the way of
preparation. Such reproducibility is because the systems consist of so
many molecules that the molecular variations between particular
occasions of preparation have negligible or scarcely discernible effects
on the macroscopic variables that are used in thermodynamic
descriptions. This led to Boltzmann's discovery that entropy had a
statistical or probabilistic nature. Probabilistic and statistical
explanations arise from the experimental reproducibility of the
phenomena.
[142]
Gradually, the laws of thermodynamics came to be used to explain
phenomena that occur outside the experimental laboratory. For example,
phenomena on the scale of the earth's atmosphere cannot be reproduced in
a laboratory experiment. But
processes in the atmosphere can be modeled by use of thermodynamic ideas, extended well beyond the scope of laboratory equilibrium thermodynamics.
[143][144][145]
A parcel of air can, near enough for many studies, be considered as a
closed thermodynamic system, one that is allowed to move over
significant distances. The pressure exerted by the surrounding air on
the lower face of a parcel of air may differ from that on its upper
face. If this results in rising of the parcel of air, it can be
considered to have gained potential energy as a result of work being
done on it by the combined surrounding air below and above it. As it
rises, such a parcel usually expands because the pressure is lower at
the higher altitudes that it reaches. In that way, the rising parcel
also does work on the surrounding atmosphere. For many studies, such a
parcel can be considered nearly to neither gain nor lose energy by heat
conduction to its surrounding atmosphere, and its rise is rapid enough
to leave negligible time for it to gain or lose heat by radiation;
consequently the rising of the parcel is near enough adiabatic. Thus the
adiabatic gas law
accounts for its internal state variables, provided that there is no
precipitation into water droplets, no evaporation of water droplets, and
no sublimation in the process. More precisely, the rising of the parcel
is likely to occasion friction and turbulence, so that some potential
and some kinetic energy of bulk converts into internal energy of air
considered as effectively stationary. Friction and turbulence thus
oppose the rising of the parcel.
[146][147]
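As a rough numerical illustration (not from the article) of the adiabatic gas law applied to a rising dry parcel, the temperature follows T2 = T1 (p2/p1)^((γ−1)/γ); the value of γ and the pressures below are assumptions for the example:

    # Hedged sketch: dry adiabatic cooling of a rising air parcel.
    # Assumes an ideal diatomic gas (gamma = 1.4); values are illustrative.

    def adiabatic_temperature(T1, p1, p2, gamma=1.4):
        """Temperature after an adiabatic pressure change; T * p^((1-gamma)/gamma) is constant."""
        return T1 * (p2 / p1) ** ((gamma - 1.0) / gamma)

    # A parcel at 288 K rising from 1000 hPa to 900 hPa cools by roughly 9 K.
    print(adiabatic_temperature(288.0, 100000.0, 90000.0))  # ≈ 279.5 K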
Applied fields
Electromagnetism
Electromagnetism, or the
electromagnetic force, is one of the four
fundamental interactions in
nature, the other three being the
strong interaction, the
weak interaction, and
gravitation. This
force is described by
electromagnetic fields, and has innumerable physical instances including the interaction of
electrically charged particles and the interaction of uncharged magnetic force fields with electrical conductors.
The word
electromagnetism is a compound form of two
Greek terms, ἢλεκτρον,
ēlektron, "
amber", and μαγνήτης,
magnētēs, "
magnet". The
science of electromagnetic phenomena is defined in terms of the electromagnetic force, sometimes called the
Lorentz force, which includes both
electricity and
magnetism as elements of one phenomenon.
During the
quark epoch, the
electroweak force split into the electromagnetic and
weak force.
The electromagnetic force plays a major role in determining the
internal properties of most objects encountered in daily life. Ordinary
matter takes its form as a result of
intermolecular forces between individual
molecules in matter.
Electrons are bound by electromagnetic wave mechanics into orbitals around
atomic nuclei to form
atoms, which are the building blocks of molecules. This governs the processes involved in
chemistry, which arise from interactions between the
electrons
of neighboring atoms, which are in turn determined by the interaction
between electromagnetic force and the momentum of the electrons.
There are numerous
mathematical descriptions of the electromagnetic field. In
classical electrodynamics,
electric fields are described as
electric potential and
electric current in
Ohm's law,
magnetic fields are associated with
electromagnetic induction and
magnetism, and
Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents.
The theoretical implications of electromagnetism, in particular the
establishment of the speed of light based on properties of the "medium"
of propagation (
permeability and
permittivity), led to the development of
special relativity by
Albert Einstein in 1905.
History of the theory
Originally electricity and magnetism were thought of as two separate
forces. This view changed, however, with the publication of
James Clerk Maxwell's 1873
Treatise on Electricity and Magnetism
in which the interactions of positive and negative charges were shown
to be regulated by one force. There are four main effects resulting from
these interactions, all of which have been clearly demonstrated by
experiments:
- Electric charges attract or repel one another with a force inversely
proportional to the square of the distance between them: unlike charges
attract, like ones repel.
- Magnetic poles (or states of polarization at individual points)
attract or repel one another in a similar way and always come in pairs:
every north pole is yoked to a south pole.
- An electric current in a wire creates a circular magnetic field
around the wire, its direction (clockwise or counter-clockwise)
depending on that of the current.
- A current is induced in a loop of wire when it is moved towards or
away from a magnetic field, or a magnet is moved towards or away from
it, the direction of current depending on that of the movement.
While preparing for an evening lecture on 21 April 1820,
Hans Christian Ørsted made a surprising observation. As he was setting up his materials, he noticed a
compass needle deflected from
magnetic north
when the electric current from the battery he was using was switched on
and off. This deflection convinced him that magnetic fields radiate
from all sides of a wire carrying an electric current, just as light and
heat do, and that it confirmed a direct relationship between
electricity and magnetism.
At the time of discovery, Ørsted did not suggest any satisfactory
explanation of the phenomenon, nor did he try to represent the
phenomenon in a mathematical framework. However, three months later he
began more intensive investigations. Soon thereafter he published his
findings, proving that an electric current produces a magnetic field as
it flows through a wire. The
CGS unit of
magnetic induction (
oersted) is named in honor of his contributions to the field of electromagnetism.
His findings resulted in intensive research throughout the scientific community in
electrodynamics. They influenced French physicist
André-Marie Ampère's
developments of a single mathematical form to represent the magnetic
forces between current-carrying conductors. Ørsted's discovery also
represented a major step toward a unified concept of energy.
This unification, which was observed by
Michael Faraday, extended by
James Clerk Maxwell, and partially reformulated by
Oliver Heaviside and
Heinrich Hertz, is one of the key accomplishments of 19th century
mathematical physics. It had far-reaching consequences, one of which was the understanding of the nature of
light. Unlike what the electromagnetic theory of the time proposed, light and other
electromagnetic waves are at present seen as taking the form of
quantized, self-propagating
oscillatory electromagnetic field disturbances called
photons. Different
frequencies of oscillation give rise to the different forms of
electromagnetic radiation, from
radio waves at the lowest frequencies, to visible light at intermediate frequencies, to
gamma rays at the highest frequencies.
Ørsted was not the only person to examine the relation between electricity and magnetism. In 1802
Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle by electrostatic charges. Actually, no
galvanic
current existed in the setup and hence no electromagnetism was present.
An account of the discovery was published in 1802 in an Italian
newspaper, but it was largely overlooked by the contemporary scientific
community.
[1]
Overview
The electromagnetic force is one of the four known
fundamental forces. The other fundamental forces are the strong interaction, the weak interaction, and gravitation.
All other forces (e.g.,
friction) are ultimately derived from these fundamental forces and momentum carried by the movement of particles.
The electromagnetic force is the one responsible for practically all
the phenomena one encounters in daily life above the nuclear scale, with
the exception of gravity. Roughly speaking, all the forces involved in
interactions between
atoms can be explained by the electromagnetic force acting on the electrically charged
atomic nuclei and
electrons
inside and around the atoms, together with how these particles carry
momentum by their movement. This includes the forces we experience in
"pushing" or "pulling" ordinary material objects, which come from the
intermolecular forces between the individual
molecules in our bodies and those in the objects. It also includes all forms of
chemical phenomena.
A necessary part of understanding the intra-atomic to intermolecular
forces is the effective force generated by the momentum of the
electrons' movement, and that electrons move between interacting atoms,
carrying momentum with them. As a collection of electrons becomes more
confined, their minimum momentum necessarily increases due to the
Pauli exclusion principle.
The behaviour of matter at the molecular scale including its density is
determined by the balance between the electromagnetic force and the
force generated by the exchange of momentum carried by the electrons
themselves.
Classical electrodynamics
The scientist
William Gilbert proposed, in his
De Magnete
(1600), that electricity and magnetism, while both capable of causing
attraction and repulsion of objects, were distinct effects. Mariners had
noticed that lightning strikes had the ability to disturb a compass
needle, but the link between lightning and electricity was not confirmed
until
Benjamin Franklin's
proposed experiments in 1752. One of the first to discover and publish a
link between man-made electric current and magnetism was
Romagnosi, who in 1802 noticed that connecting a wire across a
voltaic pile deflected a nearby
compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment.
[2] Ørsted's work influenced Ampère to produce a theory of electromagnetism that set the subject on a mathematical foundation.
A theory of electromagnetism, known as
classical electromagnetism, was developed by various
physicists over the course of the 19th century, culminating in the work of
James Clerk Maxwell,
who unified the preceding developments into a single theory and
discovered the electromagnetic nature of light. In classical
electromagnetism, the electromagnetic field obeys a set of equations
known as
Maxwell's equations, and the electromagnetic force is given by the
Lorentz force law.
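As a minimal sketch (illustrative, not part of the article) of the standard Lorentz force law just mentioned, F = q(E + v × B) on a point charge; the helper names and field values are assumptions for the example:

    # Hedged sketch: Lorentz force F = q(E + v x B) on a point charge.
    # Plain 3-vectors as tuples; all numbers are illustrative.

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def lorentz_force(q, E, v, B):
        vxB = cross(v, B)
        return tuple(q * (E[i] + vxB[i]) for i in range(3))

    # Electron moving along +x through a magnetic field along +z.
    q = -1.602e-19  # charge, C
    print(lorentz_force(q, (0.0, 0.0, 0.0), (1e6, 0.0, 0.0), (0.0, 0.0, 0.01)))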
One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with
classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the
speed of light in a vacuum is a universal constant, dependent only on the
electrical permittivity and
magnetic permeability of
free space. This violates
Galilean invariance,
a long-standing cornerstone of classical mechanics. One way to
reconcile the two theories (electromagnetism and classical mechanics) is
to assume the existence of a
luminiferous aether
through which the light propagates. However, subsequent experimental
efforts failed to detect the presence of the aether. After important
contributions of
Hendrik Lorentz and
Henri Poincaré,
in 1905, Albert Einstein solved the problem with the introduction of
special relativity, which replaces classical kinematics with a new
theory of kinematics that is compatible with classical electromagnetism.
(For more information, see
History of special relativity.)
In addition, relativity theory shows that in moving frames of
reference a magnetic field transforms to a field with a nonzero electric
component and vice versa; thus firmly showing that they are two sides
of the same coin, and thus the term "electromagnetism". (For more
information, see
Classical electromagnetism and special relativity and
Covariant formulation of classical electromagnetism.)
Photoelectric effect
In another paper published in that same year, Albert Einstein
undermined the very foundations of classical electromagnetism. In his
theory of the
photoelectric effect (for which he won the Nobel prize for physics) and inspired by the idea of
Max Planck's "quanta", he posited that light could exist in discrete particle-like quantities as well, which later came to be known as
photons. Einstein's theory of the photoelectric effect extended the insights that appeared in the solution of the
ultraviolet catastrophe presented by
Max Planck
in 1900. In his work, Planck showed that hot objects emit
electromagnetic radiation in discrete packets ("quanta"), which leads to
a finite total
energy emitted as
black body radiation.
Both of these results were in direct contradiction with the classical
view of light as a continuous wave. Planck's and Einstein's theories
were progenitors of
quantum mechanics,
which, when formulated in 1925, necessitated the invention of a quantum
theory of electromagnetism. This theory, completed in the 1940s-1950s,
is known as
quantum electrodynamics (or "QED"), and, in situations where
perturbation theory is applicable, is one of the most accurate theories known to physics.
Quantities and units
Electromagnetic units are part of a system of electrical units
based primarily upon the magnetic properties of electric currents, the
fundamental SI unit being the ampere. The SI electromagnetic units include the ampere (current), coulomb (charge), volt (potential), ohm (resistance), farad (capacitance), siemens (conductance), weber (magnetic flux), tesla (magnetic flux density), and henry (inductance).
In the electromagnetic
cgs system, electric current is a fundamental quantity defined via
Ampère's law and takes the
permeability
as a dimensionless quantity (relative permeability) whose value in a
vacuum is unity. As a consequence, the square of the speed of light
appears explicitly in some of the equations interrelating quantities in
this system.
Formulas for physical laws of electromagnetism (such as
Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no
one-to-one correspondence
between electromagnetic units in SI and those in CGS, as is the case
for mechanical units. Furthermore, within CGS, there are several
plausible choices of electromagnetic units, leading to different unit
"sub-systems", including
Gaussian, "ESU", "EMU", and
Heaviside–Lorentz.
Among these choices, Gaussian units are the most common today, and in
fact the phrase "CGS units" is often used to refer specifically to
CGS-Gaussian units.
Electromagnetic phenomena
With the exception of
gravitation, electromagnetic phenomena as described by
quantum electrodynamics
(which includes classical electrodynamics as a limiting case) account
for almost all physical phenomena observable to the unaided human
senses, including
light and other
electromagnetic radiation, all of
chemistry, most of
mechanics (excepting gravitation), and, of course,
magnetism and
electricity.
Magnetic monopoles (and "Gilbert" dipoles) are not strictly
electromagnetic phenomena, since in standard electromagnetism, magnetic
fields are generated not by true "magnetic charge" but by currents.
There are, however,
condensed matter analogs of magnetic monopoles in exotic materials (
spin ice) created in the laboratory.
[4]
Electromagnetic induction
Electromagnetic induction is the induction of an
electromotive force in a
circuit by varying the magnetic flux linked with the circuit. The phenomenon was first investigated in 1830–31 by
Joseph Henry and
Michael Faraday, who discovered that when the
magnetic field around an electromagnet was increased or decreased, an
electric current could be detected in a nearby
conductor.
A current can also be induced by constantly moving a permanent magnet
in and out of a coil of wire, or by constantly moving a conductor near a
stationary permanent
magnet. The induced electromotive force is proportional to the rate of change of the magnetic flux cutting across the circuit.
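A small sketch (illustrative, not the article's) of the proportionality just stated, approximating the induced EMF as the rate of change of flux, ε = −N dΦ/dt; the minus sign, the turn count N, and all values are standard-form assumptions for the example:

    # Hedged sketch: induced EMF from a changing magnetic flux,
    # emf = -N * dPhi/dt, approximated by a finite difference.

    def induced_emf(N_turns, flux_before_wb, flux_after_wb, dt_s):
        """Faraday's law with N turns; finite-difference approximation."""
        return -N_turns * (flux_after_wb - flux_before_wb) / dt_s

    # Flux through a 100-turn coil rises by 2 mWb over 10 ms.
    print(induced_emf(100, 0.000, 0.002, 0.010))  # -20.0 volts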
Waves
This article is about waves in the scientific sense. For waves on the surface of the ocean or lakes, see
Wind wave. For other uses of wave or waves, see
Wave (disambiguation).
In
physics, a
wave is a disturbance or oscillation that travels through space and matter, accompanied by a transfer of
energy.
Wave motion transfers
energy
from one point to another, often with no permanent displacement of the
particles of the medium—that is, with little or no associated mass
transport. They consist, instead, of
oscillations
or vibrations around almost fixed locations. Waves are described by a
wave equation which sets out how the disturbance proceeds over time. The
mathematical form of this equation varies depending on the type of
wave.
There are two main types of waves.
Mechanical waves propagate through a medium, and the substance of this medium is deformed. The deformation reverses itself owing to
restoring forces
resulting from its deformation. For example, sound waves propagate via
air molecules colliding with their neighbors. When air molecules
collide, they also bounce away from each other (a restoring force). This
keeps the molecules from continuing to travel in the direction of the
wave.
The second main type of wave,
electromagnetic waves,
do not require a medium. Instead, they consist of periodic oscillations
of electrical and magnetic fields generated by charged particles, and
can therefore travel through a
vacuum. These types of waves vary in wavelength, and include
radio waves,
microwaves,
infrared radiation,
visible light,
ultraviolet radiation,
X-rays, and
gamma rays.
Further, the behavior of particles in
quantum mechanics is described by waves, and researchers believe that
gravitational waves also travel through space, although gravitational waves have never been directly detected.
A wave can be
transverse or
longitudinal
depending on the direction of its oscillation. Transverse waves occur
when a disturbance creates oscillations perpendicular (at right angles)
to the propagation (the direction of energy transfer). Longitudinal
waves occur when the oscillations are
parallel
to the direction of propagation. While mechanical waves can be both
transverse and longitudinal, all electromagnetic waves are transverse.
General features
A single, all-encompassing definition for the term
wave is not straightforward. A
vibration can be defined as a
back-and-forth
motion around a reference value. However, a vibration is not
necessarily a wave. An attempt to define the necessary and sufficient
characteristics that qualify a
phenomenon to be called a
wave results in a fuzzy border line.
The term
wave is often intuitively understood as referring to a
transport of spatial disturbances that are generally not accompanied by
a motion of the medium occupying this space as a whole. In a wave, the
energy of a
vibration is moving away from the source in the form of a disturbance within the surrounding medium (
Hall 1980, p. 8). However, this notion is problematic for a
standing wave (for example, a wave on a string), where
energy is moving in both directions equally, or for electromagnetic (e.g., light) waves in a
vacuum,
where the concept of medium does not apply and interaction with a
target is the key to wave detection and practical applications. There
are
water waves on the ocean surface;
gamma waves and
light waves emitted by the Sun;
microwaves used in microwave ovens and in
radar equipment;
radio waves broadcast by radio stations; and
sound waves generated by radio receivers, telephone handsets and living creatures (as voices), to mention only a few wave phenomena.
It may appear that the description of waves is closely related to
their physical origin for each specific instance of a wave process. For
example,
acoustics is distinguished from
optics in that sound waves are related to a mechanical rather than an electromagnetic wave transfer caused by
vibration. Concepts such as
mass,
momentum,
inertia, or
elasticity,
become therefore crucial in describing acoustic (as distinct from
optic) wave processes. This difference in origin introduces certain wave
characteristics particular to the properties of the medium involved.
For example, in the case of air:
vortices,
radiation pressure,
shock waves etc.; in the case of solids:
Rayleigh waves,
dispersion; and so on.
Other properties, however, although usually described in terms of
origin, may be generalized to all waves. For such reasons, wave theory
represents a particular branch of
physics that is concerned with the properties of wave processes independently of their physical origin.
[1]
For example, based on the mechanical origin of acoustic waves, a moving
disturbance in space–time can exist if and only if the medium involved
is neither infinitely stiff nor infinitely pliable. If all the parts
making up a medium were rigidly
bound, then they would all
vibrate as one, with no delay in the transmission of the vibration and
therefore no wave motion. On the other hand, if all the parts were
independent, then there would not be any transmission of the vibration
and again, no wave motion. Although the above statements are meaningless
in the case of waves that do not require a medium, they reveal a
characteristic that is relevant to all waves regardless of origin:
within a wave, the
phase
of a vibration (that is, its position within the vibration cycle) is
different for adjacent points in space because the vibration reaches
these points at different times.
Similarly, wave processes revealed from the study of waves other than
sound waves can be significant to the understanding of sound phenomena.
A relevant example is
Thomas Young's principle of interference (Young, 1802, in
Hunt 1992, p. 132). This principle was first introduced in Young's study of
light and, within some specific contexts (for example,
scattering of sound by sound), is still a researched area in the study of sound.
Mathematical description of one-dimensional waves
Wave equation
Consider a traveling
transverse wave (which may be a
pulse) on a string (the medium). Consider the string to have a single spatial dimension. Consider this wave as traveling
- in the x direction in space, e.g., let the positive x direction be to the right, and the negative x direction be to the left;
- with constant amplitude u;
- with constant velocity v, where v is independent of wavelength (no dispersion) and independent of amplitude (linear media);
- with constant waveform, or shape.
The wavelength λ can be measured between any two corresponding points on a waveform.
This wave can then be described by the two-dimensional functions
u(x, t) = F(x − vt) (waveform traveling to the right),
u(x, t) = G(x + vt) (waveform traveling to the left),
or, more generally, by
d'Alembert's formula:
[3]
u(x, t) = F(x − vt) + G(x + vt),
representing two component waveforms F and G
traveling through the medium in opposite directions. A generalized representation of this wave can be obtained
[4] as the
partial differential equation
(1/v²) ∂²u/∂t² = ∂²u/∂x².
General solutions are based upon
Duhamel's principle.
[5]
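As a numerical sketch (illustrative, not from the article) that a right-moving waveform u(x, t) = F(x − vt) satisfies the wave equation above, one can compare finite-difference estimates of the two sides; the pulse shape and step size are assumptions for the example:

    # Hedged sketch: check (1/v^2) u_tt = u_xx numerically for u = F(x - v*t),
    # using central finite differences. F is an arbitrary smooth pulse.
    import math

    v, h = 2.0, 1e-3

    def F(s):                      # a Gaussian pulse as the waveform
        return math.exp(-s * s)

    def u(x, t):
        return F(x - v * t)

    x0, t0 = 0.3, 0.1
    u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
    u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
    print(u_tt / v**2, u_xx)       # the two values agree to about 1e-6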
Wave forms
The form or shape of
F in
d'Alembert's formula involves the argument
x − vt. Constant values of this argument correspond to constant values of
F, and these constant values occur if
x increases at the same rate that
vt increases. That is, the wave shaped like the function
F will move in the positive
x-direction at velocity
v (and
G will propagate at the same speed in the negative
x-direction).
[6]
In the case of a periodic function F with period λ, that is,
F(x + λ − vt) = F(x − vt),
the periodicity of F in space means that a snapshot of the wave at a given time t
finds the wave varying periodically in space with period λ (the
wavelength of the wave). In a similar fashion, this periodicity of F
implies a periodicity in time as well:
F(x − v(t + T)) = F(x − vt) provided vT = λ,
so an observation of the wave at a fixed location x
finds the wave undulating periodically in time with period T = λ/v.
[7]
Amplitude and modulation
Illustration of the
envelope (the slowly varying red curve) of an amplitude-modulated wave. The fast varying blue curve is the
carrier wave, which is being modulated.
The amplitude of a wave may be constant (in which case the wave is a
c.w. or
continuous wave), or may be
modulated so as to vary with time and/or position. The outline of the variation in amplitude is called the
envelope of the wave. Mathematically, the
modulated wave can be written in the form:
[8][9][10]
u(x, t) = A(x, t) sin(kx − ωt + φ),
where A(x, t) is the amplitude envelope of the wave, k is the
wavenumber and φ is the
phase. If the
group velocity v_g (see below) is wavelength-independent, this equation can be simplified as:
[11]
u(x, t) = A(x − v_g t) sin(kx − ωt + φ),
showing that the envelope moves with the group velocity and retains
its shape. Otherwise, in cases where the group velocity varies with
wavelength, the pulse shape changes in a manner often described using an
envelope equation.
[11][12]
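A brief sketch (illustrative, not from the article) of such a modulated wave: a Gaussian envelope, translated at an assumed group velocity, multiplying a carrier; all parameter values are assumptions for the example:

    # Hedged sketch: amplitude-modulated wave u = A(x - vg*t) * sin(k*x - w*t),
    # with a Gaussian envelope A. All parameters are illustrative.
    import math

    k, w, vg = 10.0, 20.0, 1.5     # wavenumber, angular frequency, group velocity

    def envelope(s):
        return math.exp(-s * s / 2.0)

    def modulated(x, t):
        return envelope(x - vg * t) * math.sin(k * x - w * t)

    # The envelope peak sits at x = vg*t: at t = 2.0 it has moved to x = 3.0,
    # and the carrier is bounded by the envelope everywhere.
    print(abs(modulated(3.0, 2.0)) <= envelope(0.0))  # True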
Phase velocity and group velocity
There are two velocities that are associated with waves, the
phase velocity and the
group velocity.
To understand them, one must consider several types of waveform. For
simplification, examination is restricted to one dimension.
This figure shows a wave with the group velocity and phase velocity going in different directions.
The most basic wave (a form of
plane wave) may be expressed in the form:
ψ(x, t) = A exp(i(kx − ωt)),
which can be related to the usual sine and cosine forms using
Euler's formula. Rewriting the argument, kx − ωt = (2π/λ)(x − vt), makes clear that this expression describes a vibration of wavelength λ = 2π/k
traveling in the
x-direction with a constant
phase velocity v_p = ω/k.
[13]
The other type of wave to be considered is one with localized structure described by an
envelope, which may be expressed mathematically as, for example:
ψ(x, t) = ∫ A(k1) exp(i(k1x − ω(k1)t)) dk1,
where now
A(k1) (the integral is the inverse Fourier transform of A(k1)) is a function exhibiting a sharp peak in a region of wave vectors Δk surrounding the point
k1 = k. In exponential form:
A(k1) = A0(k1) exp(iφ(k1)),
with
A0 the magnitude of
A. For example, a common choice for
A0 is a
Gaussian wave packet:
[14]
A0(k1) = N exp(−σ²(k1 − k)²/2),
where σ determines the spread of
k1-values about
k, and
N is the amplitude of the wave.
The exponential function inside the integral for ψ oscillates rapidly with its argument, say φ(
k1), and where it varies rapidly, the exponentials cancel each other out,
interfere destructively, contributing little to ψ.
[13]
However, an exception occurs at the location where the argument φ of
the exponential varies slowly. (This observation is the basis for the
method of
stationary phase for evaluation of such integrals.
[15]) The condition for φ to vary slowly is that its rate of change with
k1 be small; this rate of variation is:
[13]
dφ/dk1 = x − t dω/dk1,
where the evaluation is made at
k1 = k because
A(k1) is centered there. This result shows that the position
x where the phase changes slowly, the position where ψ is appreciable, moves with time at a speed called the
group velocity:
v_g = dω/dk.
The group velocity therefore depends upon the
dispersion relation connecting ω and
k. For example, in quantum mechanics the energy of a particle represented as a wave packet is
E = ħω = (ħk)²/(2m). Consequently, for that wave situation, the group velocity is
v_g = ħk/m,
showing that the velocity of a localized particle in quantum mechanics is its group velocity.
[13] Because the group velocity varies with
k, the shape of the wave packet broadens with time, and the particle becomes less localized.
[16]
In other words, the constituent waves of the wave
packet travel at rates that vary with their wavelength, so some move
faster than others, and they cannot maintain the same
interference pattern as the wave propagates.
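A numerical sketch (illustrative, not from the article) of the two velocities for the quantum dispersion relation ω(k) = ħk²/(2m) quoted above: the phase velocity is ω/k, while the group velocity dω/dk comes out twice as large; the chosen k is an assumption for the example:

    # Hedged sketch: phase velocity w/k vs group velocity dw/dk for the
    # free-particle dispersion w(k) = hbar*k^2/(2m). Values illustrative.

    hbar, m = 1.054571817e-34, 9.1093837015e-31   # J*s, kg (electron)

    def omega(k):
        return hbar * k * k / (2.0 * m)

    k = 1.0e10                                     # 1/m
    v_phase = omega(k) / k
    dk = k * 1e-6
    v_group = (omega(k + dk) - omega(k - dk)) / (2.0 * dk)  # numerical dw/dk
    print(v_phase, v_group, v_group / v_phase)     # ratio ≈ 2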
Sinusoidal waves
Mathematically, the most basic wave is the (spatially) one-dimensional
sine wave (or
harmonic wave or
sinusoid) with an amplitude u
described by the equation:
u(x, t) = A sin(kx − ωt + φ),
where
- A is the maximum amplitude
of the wave, the maximum distance from the highest point of the disturbance
in the medium (the crest) to the equilibrium point during one wave
cycle. In the illustration to the right, this is the maximum vertical
distance between the baseline and the wave;
- x is the space coordinate;
- t is the time coordinate;
- k is the wavenumber;
- ω is the angular frequency;
- φ is the phase constant.
The units of the amplitude depend on the type of wave. Transverse
mechanical waves (e.g., a wave on a string) have an amplitude expressed
as a
distance
(e.g., meters), longitudinal mechanical waves (e.g., sound waves) use
units of pressure (e.g., pascals), and electromagnetic waves (a form of
transverse vacuum wave) express the amplitude in terms of its
electric field (e.g., volts/meter).
The
wavelength λ is the distance between two sequential crests or troughs (or other equivalent points), and is generally measured in meters. The
wavenumber k, the spatial frequency of the wave in
radians per unit distance (typically per meter), can be associated with the wavelength by the relation
k = 2π/λ.
The
period T is the time for one complete cycle of an oscillation of a wave. The
frequency f is the number of periods per unit time (per second) and is typically measured in
hertz. These are related by:
f = 1/T.
In other words, the frequency and period of a wave are reciprocals.
The
angular frequency ω represents the frequency in radians per second. It is related to the frequency or period by
ω = 2πf = 2π/T.
The wavelength λ
of a sinusoidal waveform traveling at constant speed v
is given by:
[17]
λ = v/f,
where v
is called the phase speed (magnitude of the
phase velocity) of the wave and f
is the wave's frequency.
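The relations just listed chain together; a small sketch (illustrative, not from the article) deriving all of the sinusoid's parameters from an assumed speed and frequency:

    # Hedged sketch: derive wavelength, period, wavenumber, and angular
    # frequency from a phase speed and frequency. Values are illustrative.
    import math

    def wave_parameters(v, f):
        lam = v / f              # wavelength,        lambda = v / f
        T = 1.0 / f              # period,            T = 1 / f
        k = 2.0 * math.pi / lam  # wavenumber,        k = 2*pi / lambda
        w = 2.0 * math.pi * f    # angular frequency, omega = 2*pi*f
        return lam, T, k, w

    # Sound at 440 Hz in air (speed ~343 m/s) has a wavelength of ~0.78 m.
    print(wave_parameters(343.0, 440.0))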
Wavelength can be a useful concept even if the wave is not
periodic in space. For example, in an ocean wave approaching shore, the incoming wave undulates with a varying
local
wavelength that depends in part on the depth of the sea floor compared
to the wave height. The analysis of the wave can be based upon
comparison of the local wavelength with the local water depth.
[18]
Although arbitrary wave shapes will propagate unchanged in lossless
linear time-invariant systems, in the presence of dispersion the
sine wave is the unique shape that will propagate unchanged but for phase and amplitude, making it easy to analyze.
[19] Due to the
Kramers–Kronig relations,
a linear medium with dispersion also exhibits loss, so the sine wave
propagating in a dispersive medium is attenuated in certain frequency
ranges that depend upon the medium.
[20] The
sine function is periodic, so the
sine wave or sinusoid has a
wavelength in space and a period in time.
[21][22]
The sinusoid is defined for all times and distances, whereas in
physical situations we usually deal with waves that exist for a limited
span in space and duration in time. Fortunately, an arbitrary wave shape
can be decomposed into an infinite set of sinusoidal waves by the use
of
Fourier analysis. As a result, the simple case of a single sinusoidal wave can be applied to more general cases.
[23][24] In particular, many media are
linear,
or nearly so, so the calculation of arbitrary wave behavior can be
found by adding up responses to individual sinusoidal waves using the
superposition principle to find the solution for a general waveform.
[25] When a medium is
nonlinear, the response to complex waves cannot be determined from a sine-wave decomposition.
Plane waves
Standing waves
Standing wave in a stationary medium. The red dots represent the wave
nodes.
A standing wave, also known as a
stationary wave, is a wave
that remains in a constant position. This phenomenon can occur because
the medium is moving in the opposite direction to the wave, or it can
arise in a stationary medium as a result of
interference between two waves traveling in opposite directions.
The
sum of two counter-propagating waves (of equal amplitude and frequency) creates a
standing wave.
Standing waves commonly arise when a boundary blocks further
propagation of the wave, thus causing wave reflection, and therefore
introducing a counter-propagating wave. For example, when a
violin string is displaced, transverse waves propagate out to where the string is held in place at the
bridge and the
nut, where the waves are reflected back. At the bridge and nut, the two opposed waves are in
antiphase and cancel each other, producing a
node. Halfway between two nodes there is an
antinode, where the two counter-propagating waves
enhance each other maximally. There is no net
propagation of energy over time.
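A numerical sketch (illustrative, not from the article) of the superposition just described: two equal counter-propagating sinusoids sum to a standing wave 2 sin(kx) cos(ωt), with fixed nodes wherever sin(kx) = 0; the parameter values are assumptions for the example:

    # Hedged sketch: sum of counter-propagating waves equals a standing wave,
    # sin(k*x - w*t) + sin(k*x + w*t) = 2*sin(k*x)*cos(w*t). Values illustrative.
    import math

    k, w = 3.0, 5.0

    def superposed(x, t):
        return math.sin(k * x - w * t) + math.sin(k * x + w * t)

    def standing(x, t):
        return 2.0 * math.sin(k * x) * math.cos(w * t)

    x, t = 0.7, 1.3
    print(abs(superposed(x, t) - standing(x, t)) < 1e-12)  # True
    print(abs(superposed(math.pi / k, t)) < 1e-12)         # True: a node at kx = pi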
Physical properties
Light beam exhibiting reflection, refraction, transmission and dispersion when encountering a prism
Waves exhibit common behaviors under a number of standard situations, described below.
Transmission and media
Waves normally move in a straight line (i.e. rectilinearly) through a
transmission medium. Such media can be classified into one or more of the following categories:
- A bounded medium if it is finite in extent, otherwise an unbounded medium
- A linear medium if the amplitudes of different waves at any particular point in the medium can be added
- A uniform medium or homogeneous medium if its physical properties are unchanged at different locations in space
- An anisotropic medium if one or more of its physical properties differ in one or more directions
- An isotropic medium if its physical properties are the same in all directions
Absorption
Absorption of a wave means that, when a wave strikes matter, its
energy may be taken up by the matter. When a wave whose frequency
matches a natural frequency of an atom impinges upon that atom, the
electrons of the atom are set into vibrational motion. Thus, if a wave
of a given frequency strikes a material whose electrons have the same
vibrational frequencies, those electrons absorb the energy of the wave
and transform it into vibrational motion.
Reflection
When a wave strikes a reflective surface, it changes direction, such that the angle made by the
incident wave and line
normal to the surface equals the angle made by the reflected wave and the same normal line.
Interference
Waves that encounter each other combine through
superposition to create a new wave called an
interference pattern. Important interference patterns occur for waves that are in
phase.
Refraction
Sinusoidal traveling plane wave entering a region of lower wave velocity
at an angle, illustrating the decrease in wavelength and change of
direction (refraction) that results.
Refraction is the phenomenon of a wave changing its speed. Mathematically, this means that the size of the
phase velocity changes. Typically, refraction occurs when a wave passes from one
medium into another. The amount by which a wave is refracted by a material is given by the
refractive index of the material. The directions of incidence and refraction are related to the refractive indices of the two materials by
Snell's law.
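A minimal sketch (illustrative, not from the article) of Snell's law, n1 sin θ1 = n2 sin θ2, solving for the refracted direction; the function name and index values are assumptions for the example:

    # Hedged sketch: refraction angle from Snell's law, n1*sin(t1) = n2*sin(t2).
    # Returns None for total internal reflection. Values are illustrative.
    import math

    def refraction_angle(n1, n2, incidence_deg):
        s = n1 * math.sin(math.radians(incidence_deg)) / n2
        if abs(s) > 1.0:
            return None                      # total internal reflection
        return math.degrees(math.asin(s))

    # Light entering water (n ~ 1.33) from air (n ~ 1.00) at 45 degrees.
    print(refraction_angle(1.00, 1.33, 45.0))  # ≈ 32.1 degrees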
Diffraction
Main article:
Diffraction
A wave exhibits diffraction when it encounters an obstacle that bends
the wave or when it spreads after emerging from an opening. Diffraction
effects are more pronounced when the size of the obstacle or opening is
comparable to the wavelength of the wave.
Polarization
A wave is polarized if it oscillates in one direction or plane. A
wave can be polarized by the use of a polarizing filter. The
polarization of a transverse wave describes the direction of oscillation
in the plane perpendicular to the direction of travel.
Longitudinal waves such as sound waves do not exhibit polarization.
For these waves the direction of oscillation is along the direction of
travel.
Dispersion
Schematic of light being dispersed by a prism. Click to see animation.
A wave undergoes dispersion when either the
phase velocity or the
group velocity depends on the wave frequency. Dispersion is most easily seen by letting white light pass through a
prism, the result of which is to produce the spectrum of colours of the rainbow.
Isaac Newton performed experiments with light and prisms, presenting his findings in the
Opticks (1704) that white light consists of several colours and that these colours cannot be decomposed any further.
[26]
Mechanical waves
Waves on strings
The speed of a transverse wave traveling along a
vibrating string (v) is directly proportional to the square root of the
tension of the string (T) over the
linear mass density (μ):
v = √(T/μ),
where the linear density
μ is the mass per unit length of the string.
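A one-line numerical sketch (illustrative, not from the article) of this relation; the tension and linear density are assumed values:

    # Hedged sketch: transverse wave speed on a string, v = sqrt(T / mu).
    # Tension and linear density values are illustrative.
    import math

    def string_wave_speed(tension_n, mu_kg_per_m):
        return math.sqrt(tension_n / mu_kg_per_m)

    # A string under 60 N with 0.6 g/m linear density: v ≈ 316 m/s.
    print(string_wave_speed(60.0, 0.0006))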
Acoustic waves
Acoustic or
sound waves travel at a speed given by
v = √(K/ρ),
the square root of the adiabatic bulk modulus K divided by the ambient fluid density ρ (see
speed of sound).
Water waves
Main article:
Water waves
- Ripples
on the surface of a pond are actually a combination of transverse and
longitudinal waves; therefore, the points on the surface follow orbital
paths.
- Sound—a mechanical wave that propagates through gases, liquids, solids and plasmas;
- Inertial waves, which occur in rotating fluids and are restored by the Coriolis effect;
- Ocean surface waves, which are perturbations that propagate through water.
Seismic waves
Main article:
Seismic waves
Shock waves
Other
- Waves of traffic, that is, propagation of different densities of motor vehicles, and so forth, which can be modeled as kinematic waves[27]
- Metachronal wave refers to the appearance of a traveling wave produced by coordinated sequential actions.
Electromagnetic waves
(radio, microwave, infrared, visible, ultraviolet)
An electromagnetic wave consists of two waves that are oscillations of the
electric and
magnetic
fields. An electromagnetic wave travels in a direction that is at right
angles to the oscillation direction of both fields. In the 19th
century,
James Clerk Maxwell showed that, in
vacuum, the electric and magnetic fields satisfy the
wave equation, both with speed equal to the
speed of light. From this emerged the idea that
light
is an electromagnetic wave. Electromagnetic waves can have different
frequencies (and thus wavelengths), giving rise to various types of
radiation such as
radio waves,
microwaves,
infrared, visible light,
ultraviolet and
X-rays.
Quantum mechanical waves
The
Schrödinger equation describes the wave-like behavior of particles in
quantum mechanics. Solutions of this equation are
wave functions which can be used to describe the probability density of a particle.
A propagating wave packet; in general, the
envelope of the wave packet moves at a different speed than the constituent waves.
[28]
de Broglie waves
Louis de Broglie postulated that all particles with
momentum have a wavelength
λ = h/p,
where
h is
Planck's constant, and
p is the magnitude of the
momentum of the particle. This hypothesis was at the basis of
quantum mechanics. Nowadays, this wavelength is called the
de Broglie wavelength. For example, the
electrons in a
CRT display have a de Broglie wavelength of about 10⁻¹¹ m (for a typical accelerating voltage of roughly 10 kV).
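A quick sketch (illustrative, not from the article) of that estimate, computing λ = h/p for an electron accelerated through an assumed 10 kV, treated non-relativistically:

    # Hedged sketch: de Broglie wavelength of an electron accelerated through
    # a potential difference V, non-relativistic: p = sqrt(2*m*e*V), lambda = h/p.
    import math

    h = 6.62607015e-34      # Planck's constant, J*s
    m_e = 9.1093837015e-31  # electron mass, kg
    e = 1.602176634e-19     # elementary charge, C

    def de_broglie_wavelength(volts):
        p = math.sqrt(2.0 * m_e * e * volts)
        return h / p

    print(de_broglie_wavelength(10_000.0))  # ≈ 1.2e-11 m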
A wave representing such a particle traveling in the
k-direction is expressed by the wave function as follows:
ψ(x, t) ∝ exp(i(kx − ωt)),
where the wavelength is determined by the
wave vector k as
λ = 2π/k,
and the momentum by
p = ħk.
However, a wave like this with definite wavelength is not localized
in space, and so cannot represent a particle localized in space. To
localize a particle, de Broglie proposed a superposition of different
wavelengths ranging around a central value in a
wave packet,
[29] a waveform often used in
quantum mechanics to describe the
wave function
of a particle. In a wave packet, the wavelength of the particle is not
precise, and the local wavelength deviates on either side of the main
wavelength value.
In representing the wave function of a localized particle, the
wave packet is often taken to have a
Gaussian shape and is called a
Gaussian wave packet.
[30] Gaussian wave packets also are used to analyze water waves.
[31]
For example, a Gaussian wavefunction ψ might take the form:
[32]
ψ(x) = A exp(−x²/(2σ²) + i k0 x)
at some initial time
t = 0, where the central wavelength is related to the central wave vector
k0 as λ0 = 2π/k0. It is well known from the theory of
Fourier analysis,
[33] or from the
Heisenberg uncertainty principle
(in the case of quantum mechanics), that a narrow range of wavelengths
is necessary to produce a localized wave packet, and the more localized
the envelope, the larger the spread in required wavelengths. The
Fourier transform of a Gaussian is itself a Gaussian.
[34] Given the Gaussian:
f(x) = exp(−x²/(2σ²)),
the Fourier transform (in the unitary convention) is:
f̃(k) = σ exp(−σ²k²/2).
The Gaussian in space therefore is made up of waves:
f(x) = (1/√(2π)) ∫ f̃(k) exp(ikx) dk;
that is, a number of waves of wavelengths λ such that
kλ = 2π.
The parameter σ decides the spatial spread of the Gaussian along the
x-axis, while the Fourier transform shows a spread in
wave vector k determined by 1/σ. That is, the smaller the extent in space, the larger the extent in
k, and hence in λ = 2π/
k.
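A numerical sketch (illustrative, not from the article; assumes the numpy library) confirming the reciprocal widths: sampling a Gaussian of width σ and taking its discrete Fourier transform yields a Gaussian of width about 1/σ in k; the grid and σ are assumed values:

    # Hedged sketch: the FFT of a sampled Gaussian of width sigma is (close to)
    # a Gaussian of width 1/sigma in k. Grid and sigma values are illustrative.
    import numpy as np

    sigma, n, L = 0.5, 4096, 40.0
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    f = np.exp(-x**2 / (2 * sigma**2))

    F = np.abs(np.fft.fftshift(np.fft.fft(f)))
    k = np.fft.fftshift(np.fft.fftfreq(n, d=L / n)) * 2 * np.pi

    # Width in k estimated where |F| falls to exp(-1/2) of its peak.
    half = F >= F.max() * np.exp(-0.5)
    k_width = 0.5 * (k[half].max() - k[half].min())
    print(k_width, 1.0 / sigma)  # both ≈ 2.0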
Animation showing the effect of a cross-polarized gravitational wave on a ring of
test particles
Gravitational waves
Researchers believe that
gravitational waves travel through space, although they have never been directly detected. Not to be confused with
gravity waves, gravitational waves are disturbances in the curvature of
spacetime, predicted by Einstein's theory of
general relativity.
WKB method
In a nonuniform medium, in which the wavenumber
k can depend on the location as well as the frequency, the phase term
kx is typically replaced by the integral of
k(
x)
dx, according to the
WKB method. Such nonuniform traveling waves are common in many physical problems, including the mechanics of the
cochlea and waves on hanging ropes.
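A short sketch (illustrative, not from the article) of the WKB replacement: accumulating the phase ∫ k(x) dx numerically for a position-dependent wavenumber; the form of k(x) is an assumption for the example:

    # Hedged sketch: WKB phase, the integral of k(x) dx, by the trapezoidal rule.
    # The varying wavenumber k(x) is an illustrative choice.

    def k(x):                      # wavenumber that grows along the medium
        return 5.0 + 2.0 * x

    def wkb_phase(a, b, steps=1000):
        h = (b - a) / steps
        total = 0.5 * (k(a) + k(b))
        for i in range(1, steps):
            total += k(a + i * h)
        return total * h

    # Phase accumulated from x = 0 to x = 2: the exact value is 10 + 4 = 14 rad.
    print(wkb_phase(0.0, 2.0))     # ≈ 14.0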