Frontiers of Physics Lecture Series: Dr. David Wineland, Fall 2017

[MUSIC PLAYING] KAI-MEI FU: Hello. My name is Kai-Mei Fu, and I’m an experimental
physicist in the University of Washington physics department here, and I’ve been given
the honor of introducing tonight’s guest, David Wineland. I’d first like to thank everyone for joining
us for our fourth Frontiers in Physics public lecture. As you may know, this series was started to
provide our community with inspiring, free lectures on the latest advances in physics, given by the people who directly contributed to these advances. The broad range of topics that we have heard over the past two years mirrors the broad impact that physics has on how we view the world,
and also how we impact and affect our world. In our last lecture we learned from Dr. John
Preskill how the spooky and nonintuitive nature of quantum mechanics may lead to unprecedented
computational power. And in the spring, stay tuned. We will hear from leading particle theorist
Dr. Lisa Randall who’s well-known for her work showing that extra dimensions can help
solve some of the fundamental problems in physics. Today we are excited to hear from Dr. David
Wineland, our third Nobel laureate in this series, on his contributions to the measurement
of time. Before introducing David, I would like to
first take a moment to thank Dr. Patrick O’Hara and his wife Dr. Catarina Randolph, who unfortunately
can’t be with us here tonight. It was their vision to start this series. They approached our department thinking that
what our community needed was a public lecture series, bringing the public to learn about
what’s happening in physics today. And because of their generosity it has become
a reality. I would also like to thank Phil Ekstrom, who wrote the program on a short history of timekeeping that you all now have. Phil was a PhD student of Hans Dehmelt and
overlapped with David Wineland during his time as a post-doc at the University of Washington. On that note, I would actually also like to
add that if you’re interested in getting involved in the physics department, we would love to
hear from you. There is contact information on the back of
your program. So getting involved with physics can be attending
a lecture. It can be getting added to an email list so
you can hear about all the colloquia that we have and other public and outreach events that we have going on. And also, it can be helping to support our students' individual research projects and the outreach events that we have for the community. And so now, on to the introduction. David Wineland grew up in California
and graduated from Berkeley with a bachelor’s degree in physics. He then completed his PhD with Norman Ramsey
at Harvard before joining us at the University of Washington for his post-doctoral work with
our own Nobel laureate, Hans Nobel– or, Hans Dehmelt. [LAUGHS] This was before Norman Ramsey and Hans Dehmelt would later be awarded the 1989 Nobel Prize in physics. And so if you could say one thing, you could say that David really knew how to choose an adviser, since both of his advisers were later so honored. After leaving the University of Washington, Dr. Wineland joined what is now the National Institute of Standards and Technology, and over the following decades pioneered precision techniques to measure and control individual atoms. The main motivation for this work was precision
time-keeping. But in 1995 there was a theoretical proposal
by Cirac and Zoller on a new way to process information: quantum information processing with trapped ions. And because of this very fundamental work on clocks that he had been performing, within months of that proposal he was able to demonstrate a quantum gate with a single ion– or two ions. This seminal work– the high-precision control that led to his work on clocks and quantum information processing– was honored in 2012 with the Nobel Prize. As some of you know, David Wineland spent
his post-doc here at University of Washington with Hans Dehmelt. And Hans Dehmelt passed away this past year. It is fitting that we’re able to hear today
about new scientific results that, in part, were influenced by Dehmelt’s work. Just as one day in the future we’ll hear of
discoveries that are enabled by David Wineland's work, which he'll discuss today. Thanks. [APPLAUSE] DAVID WINELAND: Well, thanks for the introduction, Kai-Mei. And first of all, thanks to all of you for coming. I mean, one of the nice things about giving
this kind of lecture is to see the large-scale interest from the public. And so I can’t teach you everything about
what we do, but I hope to give you an impression of what we do and some of the simple ideas
that will hopefully come across. So you can see my outline there, what I’ll
talk about. And so I can start by saying, well, what’s
the use of clocks? And throughout history it’s been primarily
for navigation. And that application is still true today. So just going back a little bit in time– I'm not a sailor, but any sailor will know what I'm talking about here. And the basic idea of navigation is you
want to determine your latitude and longitude, and therefore your position on the Earth. And the easy part is the latitude, where the
basic idea is if there’s some distant star– in this case, the North Star– if you measure
the angle of the direction of the star relative to the level, the tangent of the earth, you
can easily determine the latitude that you’re at. The harder part is longitude. And there you need time. And the simple reason is because the Earth
is rotating. So the principle is the same. You’re going to be looking at some distant
star that’s relatively-speaking fixed in your view, and you determine where you are by measuring
the angle between the star and the tangent on Earth where you are. The problem is, because the Earth is rotating,
you need to know time to know exactly what the angle– how the angle corresponds to your
position and longitude. And I’m not going to– I won’t go through
this simple math. The scientists out there will understand this. But the idea is, what are the errors due to the error in time? And just a simple example I show here. The idea is that a distance– say, the error in the angle– can be related to the error in time. And so this section of distance in longitude is given by this simple formula here. The idea, of course, is that this angle is changing in time, and there's an imprecision given by the imprecision in time. So in this simple expression, where we're rotating once per day and the radius of the Earth is about 4,000 miles, an error of one second in time gives an error in longitude position of about half a kilometer.
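As a rough check of that arithmetic, here is a minimal back-of-the-envelope sketch in Python using the approximate values quoted above (an Earth radius of about 4,000 miles and one rotation per day):

```python
# Back-of-the-envelope: east-west position error from a clock error.
import math

R_EARTH_KM = 4000 * 1.609      # ~4,000 miles expressed in kilometers
T_DAY_S = 24 * 3600            # one rotation per day, in seconds

# Speed of a point on the equator due to Earth's rotation:
v_equator = 2 * math.pi * R_EARTH_KM / T_DAY_S   # ~0.47 km per second

dt = 1.0                       # clock error of one second
print(v_equator * dt)          # ~0.47 km: about half a kilometer of longitude error
```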
And so just a little history. Of course, sailors going back many centuries relied on time to be able to navigate. And there were several incidents in the early
1700s where the Brits lost ships at sea. And so the British parliament decided to sponsor
a prize, the so-called "Longitude Act," and the prize was to be given to someone who could demonstrate a clock which would allow navigation to within about 30 nautical miles, which translated to an error in time of about two minutes. And there's a famous story that– many of
you may have read the book. There’s a couple of good books about this–
John Harrison, who came up with the clock. It was actually like a large pocket watch. It was a mechanical clock that he came up
with. And he was able to demonstrate that it was within the errors required to win this prize. And part of the story is that the parliament–
you know, they wouldn’t give him his money. And I think it was a couple decades before
he finally– the King stepped in and got him his money, and then I think he died a year
later or something like that. But anyway, it’s a good story. There’s a couple of good books. It’s a good story to read. But of course these days the way we navigate
is via GPS. We kind of take it for granted these days. And the precisions can be much, much higher. And the basic idea there is that, to give
you the idea, you want to determine your distance from some satellite. And again, I won’t go through the math, but
the basic idea is that because clocks are so precise, that basically you can think of
a simple protocol– it's a little bit more complicated than this– but, first of all, your clocks have to be synchronized. And if you establish a protocol, say, where the satellite emits a pulse of radiation on every second tick, then there will be a delay, because of the speed of light, before it reaches you. And so to give you an idea of the precisions that are involved: an error in time of about a nanosecond– 10 to the minus 9 seconds– gives an error in distance of about 30 centimeters. And where clocks come in is, basically, we'd like the clocks to be autonomous, at least over extended periods. And so if we have an error of a nanosecond over one day, that corresponds to a relative frequency uncertainty of the clocks of about one part in 10 to the 14.
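A minimal Python sketch of those two numbers; the only constant assumed here is the speed of light:

```python
# Back-of-the-envelope: GPS-style timing numbers quoted above.
C = 3.0e8                      # speed of light, m/s (approximate)

dt = 1e-9                      # a one-nanosecond timing error
print(C * dt)                  # ~0.3 m: about 30 centimeters of range error

SECONDS_PER_DAY = 24 * 3600
print(dt / SECONDS_PER_DAY)    # ~1.2e-14: fractional frequency error allowed if the
                               # clock may drift by only 1 ns over one day
```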
And these are the kind of numbers we're able to reach today. Of course, it's a little bit more complicated
to get three dimensional navigation. There’s a network of satellites. And, in fact, there’s enough redundancy in
the system that the satellites can not only know their position, but the clocks can be
synchronized together so we can get three dimensional position at this level of precision. So anyway, a bit about clocks. I think most physicists’ notion about time
is very similar to the non-scientists. It’s just sort of a measure of the progression
of events. And the basic idea of making a clock then
is, we have some periodic event generator and a counter. And with that then we can generate time. And traditional event generators are, of course,
the rotation of the Earth. And also later the pendulum clocks were invented,
and the precision was quite a bit better than given by the rotation of the Earth. But that’s basically it. So these days though we think of using oscillations
in atoms, and a simple picture you can think about is, say, an electron orbiting around
the nucleus. That’s a pretty good picture. Quantum mechanically it’s a little bit more
complicated. What we think about is, say, an electron orbiting
around the nucleus. The quantum mechanics tells us that the electron
can’t be– its position can’t be precisely defined. But nevertheless, there’s a very precise timing
of the orbit of the electron around the nucleus. And this picture over here is meant to show
a dipole where the electron is actually oscillating through the nucleus. But the basic idea then is that we establish
this oscillation in the atom. And the key part is that, as the early quantum
mechanics told us, atoms don't exist in arbitrary energy states. There are certain discrete energy levels associated with an electron orbiting around the nucleus. And the basic idea is that the frequency
of the oscillation is then given by the energy difference between these two quantum states
divided by Planck’s constant. So one simple mode of using these oscillations
is called a maser or a laser, and we’re more familiar with the laser, actually. The first devices that operated on this principle
were called masers– the M stands for "microwave," versus the L in laser for "light." And the simple picture is, we have atoms that
we stick them inside of a cavity which contains the radiation. We sample a little bit of that, and then we
have a counter to generate time. So a little bit of personal history. I was a graduate student at Harvard with Norman
Ramsey, who's right here. Norman and his colleague Daniel Kleppner had recently invented and demonstrated the first hydrogen masers. So the basic idea of what I showed on the
previous slide. So this was the group in 1967. I started in 1965. That’s me getting to the boss there. You can see us there. [LAUGHTER] Anyway. So Norman wanted to have precise measurements
of all the hydrogen isotopes, the maser oscillation frequency of all the isotopes. So my project turned out to be to make a maser
based on the deuterium isotope. And the experiment wasn’t anything very astounding,
but it taught me a lot of nice techniques in atomic physics. And so this was the result of my thesis here. I’m probably the only one that has this number
memorized. [LAUGHTER] But anyway. But certainly, what came into play all through
my career and the things I’ll talk about that we did later, was basically that the name
of the game here is there’s– although the atoms are inside this cavity, there’s various
perturbations that they can undergo. And so one of the requirements is to precisely
control the environment they’re in. The other thing is that, as they were radiating–
the atoms live in so-called superposition states where they're radiating. And these superposition states are actually rather long-lived on the grand scale of superpositions– they last about a second. So I'll give you– there's another mode of
operation. I showed you the laser or maser mode of operation
of a clock. And the other is where we actually, say, would–
again, here’s the energy structure. And what we’d do in this case is we would
apply radiation to atoms, say, in a cell here. And basically we just look for the condition
where the atoms absorb the radiation maximally. Then we know that the frequency of this oscillator
is equal to the frequency given by the energy level separation. So a recipe for an atomic clock is very simple. The basic idea is, we have some oscillator
here which supplies radiation. And in the previous example we had looked
for the condition of the frequency of this oscillator where the absorption is maximum. And we can think of– I won’t describe the
electronics here, but we can think of a simple servo system that feeds back that forces the
oscillator frequency to be equal to the frequency given by the energy difference. And when we reach that condition we just count
cycles of the oscillator. Now, in fact, a little subtlety here is that
we get a curve that registers this sharp absorption. It’s not infinitely narrow. There’s– one of the reasons it’s not is that
the atoms in the excited state will decay, and that limits the resolution we have with
this absorption feature, we say. And so the only other ingredient in the way we actually work it is this: if we sit right on top of the resonance, we're not very sensitive to small
changes. The top of this curve is flat, basically. So the only difference is, we actually do it in a stepwise fashion, where we'll first probe on the left side of this absorption feature, and then we'll step over to the other side and probe on that side. And when we get an equal response, then we know that the mean of those two frequencies is equal to the resonance frequency.
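A minimal sketch of that two-sided probing idea in Python; the resonance frequency, linewidth, and servo gain here are hypothetical values chosen only for illustration, not numbers from the talk:

```python
# Illustration only: probe on either side of a resonance and steer the
# oscillator until the two responses are equal (a side-of-line servo).
F_ATOM = 1.0e9        # hypothetical atomic resonance frequency, Hz
LINEWIDTH = 100.0     # hypothetical linewidth, Hz

def absorption(f):
    """Lorentzian absorption profile centered on the atomic resonance."""
    return 1.0 / (1.0 + ((f - F_ATOM) / (LINEWIDTH / 2.0)) ** 2)

f_osc = F_ATOM + 37.0             # oscillator starts slightly off resonance
GAIN = 50.0                       # servo gain, chosen for this toy example

for _ in range(20):
    left = absorption(f_osc - LINEWIDTH / 2.0)    # probe below center
    right = absorption(f_osc + LINEWIDTH / 2.0)   # probe above center
    f_osc += GAIN * (right - left)                # equal responses -> on resonance

print(f_osc - F_ATOM)             # residual error, driven toward 0 Hz
```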
So that's the only wrinkle beyond what I said before. But it's actually no more complicated than
that. So again, a more common mode of operation. That was sort of for looking for continuous
absorption. The way most clocks work is just slightly
different. We basically start the atom in the lowest
energy level, and then we apply radiation for a short time. And it’s basically the same idea, is that
when the radiation frequency of the oscillator is equal to the resonance frequency of the
atom given by this expression here, then we know we’re on resonance. And of course we step from side to side of
the absorption feature. And basically then to measure this absorption
we just– ideally, we could look at the maximum absorption probability. But of course we step from side to side of
the line. So it’s just a slight difference from what
I said before. So why atomic clocks? And there’s a couple of strong reasons why
atoms are nice. And in this viewgraph I'm going to compare to the pendulum clock– which actually is still very good; good pendulums are extremely good, almost as good as quartz crystals. And you can think of quartz crystals the same way– they have some of the same conditions we have to worry about. But let's take the pendulum clock. So the frequency of a pendulum clock, as students
in physics learn early in their careers, is given by this simple formula where this is
the acceleration of gravity and this is the length of the pendulum bob. And so what are the sensitivities? What can cause the frequency of the oscillation
to change? And one is, say, temperature. The temperature is not precisely controlled
and can be fluctuating. So what happens in this example of the pendulum is that most suspension rods are usually a metal. And, of course, metals usually expand and contract as the temperature changes. And again, I won't go through the details of this simple derivation, but the basic idea is that even with materials of very low expansion coefficients– this represents the fractional change in the length of the pendulum versus temperature– we work through this simple expression here and we get to a sensitivity, expressed fractionally, of about a part in 10 to the 8th per degree C frequency change of the pendulum clock.
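As a rough sketch of that estimate in Python– the expansion coefficient here is my own assumed value for a very low-expansion material, not a number from the talk:

```python
# Pendulum clock: f = (1/(2*pi)) * sqrt(g/L), so df/f = -(1/2) * dL/L.
ALPHA = 2e-8        # assumed thermal expansion coefficient, per deg C
dT = 1.0            # a one degree C temperature change

dL_over_L = ALPHA * dT
df_over_f = -0.5 * dL_over_L
print(df_over_f)    # ~ -1e-8: about a part in 10^8 per degree C
```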
Now to atoms. What is nice is that– well, we still have to worry about temperature effects, and actually one of the more interesting ones is due to Einstein. And that is the fact that, typically, in this
container that we’re holding, the atoms aren’t at rest. They’re moving around. And what Einstein told us was that with two
frames of reference that move relative to each other, time runs at a different rate. It’s not just as simple as saying the clock
here based on the atoms runs at a different rate, but actually time runs at a different
rate. So this was, I mean, an amazing revelation
that changed our notion about nature. And it was due to Einstein. So anyway– but to give you an idea of the
size of the effect– we do have to worry about this in our high-precision clocks. And again, I won't go through the simple math here. But for a cesium atom, which has a mass of 133 mass units, the frequency of the oscillation will change due to this relativistic time dilation, and the fractional change per degree C is about a part in 10 to the 15th.
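A rough sketch of where that number comes from, assuming the usual thermal-velocity relation for a gas (the constants are standard values, not from the talk):

```python
# Time dilation of a thermal cesium atom: df/f = -<v^2>/(2 c^2),
# with <v^2> = 3 k T / m for a gas at temperature T.
K_B = 1.381e-23     # Boltzmann constant, J/K
C = 2.998e8         # speed of light, m/s
AMU = 1.661e-27     # atomic mass unit, kg

m_cs = 133 * AMU    # cesium-133 mass

# Change of the fractional shift per 1 degree C (= 1 K) change in temperature:
sensitivity = 3 * K_B / (2 * m_cs * C**2)
print(sensitivity)  # ~1e-15 per degree C, as quoted above
```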
So that's many orders of magnitude smaller sensitivity to temperature changes than this pendulum. And of course we're more
familiar these days with quartz crystals in our watches. But also in that case the temperature sensitivity
of those is quite a bit larger than what we can get with atoms. OK. So the other thing we have to think about
is if we make different– if we realize different versions of the clock, how reproducible are
they? And of course with a pendulum clock we have
manufacturing tolerances to worry about. The length of this pendulum for different
realizations can be different, which gives a different oscillation frequency. Of course there can be wear of the pendulum,
the bearing on the pendulum may wear a little bit, which will effectively lengthen the pendulum. And, of course, the frequency depends on the local value
of gravity, which can change around the world and also fluctuates with Earth tides and things
like that. So the nice thing about atoms is that all
atoms of a particular kind– as far as we know so far– they’re absolutely identical. So if we can agree on an atom whose frequency
we measure then we, in principle, can get exactly the same frequency within these environmental
perturbations that we have to worry about. And the other thing is, atoms don’t wear out. I mean, we can take the same atom and repeat
this absorption process, in principle, an infinite number of times, and they preserve
their properties throughout. So these days, actually starting– the first
cesium clocks– which measured the so-called hyperfine transition at a frequency of about
9 billion cycles per second– were developed in the early '50s. And by the late '60s, by international agreement, we used the oscillations of the cesium hyperfine transition to define the second. And this definition still holds true today. What I'm going to show you is that the performance of the clocks we can make now is better than what we can realize by measuring the cesium transition. Of course, cesium was always getting better
through the decades, so it was like a moving target. But we finally were able to overtake their
performance about 10 years ago. So why atomic optical clocks? And one of the simple reasons is that the
oscillation frequencies of electrons around the nucleus are much higher than this so-called
hyperfine transition. In this case, a typical optical oscillation is about 100,000 times faster than the oscillation of the cesium clock. And so what that means, very simply, is that we get more ticks in any given interval of time, say, the second. So we can divide that interval of time into finer and finer increments.
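To make the comparison concrete, a small Python sketch; the cesium frequency is the standard defined value, and the 282-nanometer wavelength is the mercury-ion clock transition mentioned later in the talk:

```python
# Comparing tick rates: a cesium microwave clock vs. an optical clock.
C = 2.998e8                     # speed of light, m/s

f_cs = 9.192631770e9            # cesium hyperfine frequency ("about 9 billion" Hz)
f_hg = C / 282e-9               # mercury-ion clock transition near 282 nm

print(f_hg)                     # ~1.06e15 Hz: about a million billion cycles/s
print(f_hg / f_cs)              # ~115,000: the "100,000 times faster" quoted above
```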
So that's the simple reason we want to think about high frequencies. The other thing is that, for certain transitions–
and I’ll say a little bit more about that in a minute– these transitions– the frequency
over which they absorb can be extremely narrow compared to the actual frequencies. So we get a very high relative precision. For most of the cases I’ll talk about, this
width then– which is not infinitely narrow– is given by the lifetime of the upper state
of the transition. So is it a new idea? And the answer is no. In fact, a colleague at NIST
in Boulder– he did some research, and he published this paper in the early '70s. And he dug up a text from Thomson– Lord Kelvin– and his colleague Peter Tait. And in their text they acknowledge this idea
was due to Maxwell, the kind of the inventor of the formalism of electromagnetism a long
time ago. Anyway, they wrote this couple of sentences
here. And their idea was basically what we’re talking
about with these atomic clocks. And they basically said, well, there’s recent
discoveries that are seeing the different wavelengths of emission of different atoms. You know, so they had the idea of, well, you
could really– atoms such as hydrogen or sodium, which are relatively available and in nearly
infinite numbers, they’re alike in every physical property. So this was what I was saying. They’re absolutely identical. So they had that idea. And so they were actually thinking about the–
when they say vibration of sodium particles– this was actually optical oscillations that
we’re thinking about. So the basic idea here has been around for
an extremely long time. And, of course, it’s the technology that had
to catch up to be able to realize this idea. Actually, one interesting thing in this same
paragraph– they say this oscillation of these modes is known to be absolutely independent of its position in the universe. And they can be excused, because this was before
Einstein came along and said no, that the rate of time is not absolute. But anyway, they certainly had the basic idea
many, many years ago. So anyway, after my graduate career I came
to the University of Washington and I worked with Hans Dehmelt. And his main interests– I was mostly attracted
by– I’ll say this in a minute. But anyway, he and his colleagues had done
spectroscopy on helium ions, and I was interested by that, and potentially the application in
atomic clocks. But anyway, Hans wanted to focus on an experiment
to measure the electron magnetic moment. And the basic idea there is an electron, in
addition to its charge, has the property that it behaves also like a little magnet. And the reason this experiment was so important
was that the theory of quantum electrodynamics can predict the value of this magnetic moment
to a precision of about– uncertainty of about a part in 10 to the 12th. And so this experiment was very important
because eventually it was able to measure this precision to about that level. And that’s a whole separate story. But anyway, two things to say here is that–
actually, Bob Van Dyck, who's in the front row down here, was the person who actually
led the experiment starting at the time I was here. And I actually didn’t stay around for the
actual important measurements. So these first very precise measurements were
done by Bob Van Dyck and other colleagues here at the University of Washington. Another thing to say is that, because Hans
was developing these techniques to confine, in this case, electrons– and previously the
ions– he shared the Nobel Prize with Wolfgang Paul, who invented a slightly different
kind of ion trap, we say, that we actually used for our clock experiments. And I won’t say anything very much about actually
how the trap works. You can see this electrode structure that
I’ve drawn in cartoon form that looks pretty much like this, for the non-technical people
in the audience. A very simple analogy is it’s like this–
when we apply electric fields to this electrode structure we create– for a single ion in
the trap, it's like a marble in a bowl. We create a so-called harmonic potential. And this analogy to a marble in a bowl that
rolls back and forth is actually very good. So the other thing was that I mentioned that
I was attracted– earlier on, actually, Hans and his colleagues– one being Norval Fortson, who was a post-doc before my time. He was actually a Ramsey student before my time, too, but he came to work for Hans. And they were measuring the same kind of so-called
hyperfine structure in helium. And the resolution was very high, so this–
in some sense that's where the ideas came from, and I was certainly interested in how these ideas
might be applied for atomic clocks. So what I’m going to show here is one exper–
focus on one experiment we’ve done in our lab. And I have to say, and I’ll repeat it later,
that this kind of work goes on in many labs throughout the world, and I’m going to focus
on just this example of how we used a mercury ion to realize an atomic clock. And I should also say, we did many different
kinds of experiments with these ions. But this project here, making a frequency
standard out of the mercury ion, was led by Jim Bergquist in our group. So I've been a colleague of Jim's for the
last 42 years, and so we’ve done a lot of things together. Anyway. So the basic idea here is, again, coming back
to this simple picture. I mentioned before that basically we start
the atom in the ground state, and then we apply radiation for a short amount of time. And then the idea is, we measure whether the
ion has made the transition up to the excited state. And we look for the frequency of the laser where it makes that transition with the highest probability. This transition is actually in the ultraviolet,
which is a bit of a technical problem. But we can make radiation at that frequency. Anyway, the idea– one of the issues is, how
do we detect when this atom has made the transition? Well, let me first say, these experiments
we’ve done for quite a while have been just with a single atomic ion in the trap. So why do we focus on one ion? And there’s one reason– with mercury it turns
out the upper level in the mercury ion– it kind of has– the shape of the electron cloud
around it is kind of like a football. And it turns out that the inhomogeneous electric fields of one ion, as seen by the other ion, interact with this football-shaped
charge distribution and can give a frequency shift. And it’s actually fairly large compared to
the resolution we’re trying to achieve. So that’s one of the reasons, at least our
experiments mostly up until now, have used just one single ion. In any case, the way we detect is to look
at another transition. And– I should have mentioned here– the lifetime of the upper state in this mercury ion, for this clock transition, is about a tenth of a second, which gives the very high resolution we can achieve. So the way we detect our ions– or, at least,
first of all to even identify that they’re in the trap– is we would use another transition
in mercury. And this one can scatter photons at a very,
very high rate– a scattering rate on the order of several hundred megahertz. And the nice thing about that is we can not
only identify atoms– and, in fact, in these experiments we can, with a camera that works
in the ultraviolet, we can actually make pictures of our single ions. And actually Dehmelt and his colleagues–
this was after I left, but– they also were working on single-ion experiments. And most of the ions, because there’s one
electron removed to be able to excite the next electron, the transitions are out of
the visible spectrum in the ultraviolet. But there are a couple of ions where that's not the case, and one of those is barium. And so Dehmelt and his colleagues– they actually made an experiment like this and could see the fluorescence. And the light that was scattered was in the
blue part of the spectrum. But in these experiments you can actually
see with your eye a single atom, which is pretty amazing when you think about it. It kind of looks like a faint star, but you
can actually see it with your eye. So one of the issues we had to think about
is that– in fact, in the early experiments on helium that I described that had been done
by Norval Fortson and Dehmelt and their colleagues, was that the atoms were actually moving fairly
quickly and bouncing around fairly quickly inside this trap. And so one of the limitations was this time
dilation shift that Einstein told us about. So one of the messages from that was, it would
be nice to have a way to slow down the motion of the ions then to cool them, basically. And this was, again, to suppress this time
dilation effect. So the way this simple form of laser cooling works is the following. And actually when I was here as a post-doc
with Dehmelt, we published a little paper on how this would work. And at the same time, independently, two colleagues,
Ted Haensch and Art Schawlow, who later also won Nobel prizes. Anyway, we had basically the same idea. And the idea, if you’ll bear with me a little
bit– it’s not too complicated. And the basic idea is that I’ve already told
you that atoms, rather than the energy of, say, the electrons rather than existing in
a continuous spectrum of energy states, the energy is confined to discrete energies. So that’s a key part of this. And then, as I said, just for a clock, I haven’t
said much about the motion, but the atoms absorb when the laser is tuned to this so-called resonance frequency given by the energy difference– they absorb maximally at that condition. Now of course what we have to think about
is, in general even though the atoms are confined in this trap, we say they’re moving around
with some amount of kinetic energy. And the idea is that, if the atom is moving
against the laser beam, one thing we have to worry about is this so-called first-order
Doppler shift. And the common example that we all experience
is if a, say– in my day the example was a train that would go by. And the train whistle, as it approached you
would be higher in pitch than when it was receding from you. And, of course, it's the same effect with a car
going by or a motorcycle going by, that you hear this change in the pitch of the sound. And the same idea applies to light. So the idea here is that when the atom is
moving against the photons from this laser beam, it still absorbs– but not at the frequency it would absorb at rest. The frequency it absorbs at is shifted, fractionally, by the velocity divided by the speed of light. And we can take advantage of that, because
the idea is, then what we do is we– a laser comes in from this side and we tune the frequency
lower than the frequency it would absorb at at rest. And the idea is, then when the atoms move
into this resonance condition here the atoms will absorb the light. And when they absorb the photon they get a
momentum kick which, in this case, is against their motion. And then when they re-radiate, they generally do it in all directions. So on average, every time they absorb and re-emit a photon, the momentum is reduced by the momentum of the photon. And so we can repeat this process– do many scattering events– which gives the slowing process, which allows us to cool the atoms down. And so– OK. That's what I just said.
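A rough sketch of the scale of this process in Python; the 194-nanometer cooling wavelength for the mercury ion is my assumption and is not quoted in the talk:

```python
# Roughly how many photon scattering events it takes to stop a
# room-temperature mercury ion.
import math

H = 6.626e-34          # Planck constant, J*s
K_B = 1.381e-23        # Boltzmann constant, J/K
AMU = 1.661e-27        # atomic mass unit, kg

m_hg = 199 * AMU                             # mercury-199 ion mass
v_thermal = math.sqrt(3 * K_B * 300 / m_hg)  # ~190 m/s at room temperature

p_photon = H / 194e-9                        # photon momentum at ~194 nm (assumed)
dv_per_photon = p_photon / m_hg              # ~1 cm/s velocity change per scatter

print(v_thermal / dv_per_photon)             # ~2e4 scattering events to stop the ion
```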
And so this has become a standard technique now in all atomic clocks, because the precisions are high enough that if the atoms are at room
temperature the shift is just too big from this time dilation effect. So we have to invoke this cooling idea. So anyway, a little bit of personal history
is that after my post-doc position in Dehmelt’s lab I got a position at what was then called
the National Bureau of Standards, now called NIST, the National Institute of Standards
and Technology. And my first job there when I went there was–
there was a cesium beam clock. That's what this is here. Basically, cesium atoms are made to travel down this tube, which is under vacuum, in a stream. And as they go down this tube we measure
the radiation of this so-called hyperfine transition. But anyway, my group leader at that time,
the person that hired me, had a vision for NIST that we should be doing more research. So luckily he got us some money to try this
idea of laser cooling. And sort of an interesting personal aside
was the fact that– so after Dehmelt and I had this idea– and Dehmelt had taken a sabbatical. It was after I left. And he went to work in Peter Toschek’s lab. He was then in Heidelberg. And I knew they were going to try to demonstrate
this cooling. But we got some money at NIST to try this
experiment. And so they didn’t know about our experiment,
but I knew they were trying to do this. So we were racing, at least. And so what’s interesting is basically without
any– I knew they were working on it, but I had no idea where they were in terms of
their progress. But, interesting– the papers on these first
demonstrations were published about the same time. You can see that ours was published a little
bit earlier. But to be fair, if you look at the dates these
were received at the journal, they beat us by one day. So anyway, as most of you know, I think–
I mean, these experiments are relatively long-term. And if you do it within a few months of each
other it’s certainly a tie. So anyway, both groups got credit for doing
this at the same time. But this was an amazing near coincidence here. So OK. So I’ll say a little bit more about this. So I have already alluded to this. For this mercury ion optical clock we’re going
to use this transition here in the ultraviolet at about 282 nanometers. And how do we measure the transition? So one way we describe this process– again,
we start the atom in the ground state. We apply this radiation for a little while. And then we make what's called a superposition
state. So the atom is– this is kind of a standard
notation for wave functions, but the wave function in this case after we apply this
radiation, is that the atom is in, we say, a superposition of a lower state and the first
excited state. And the one nice thing– we have a very good
way of measuring when the atom makes the transition. And it’s the idea of the following. When we turn on this other laser then we tend
to– this superposition state that we made– it can, we say, “collapse” into either the
ground state or the excited state. And it does it with a probability that’s related
to these coefficients in front of these components of the wave function. But the key idea is that, when the atom–
suppose we try to drive this transition, and suppose that the frequency of the laser is
a bit off, then the atom remains in the ground state. And we can tell when that happens because
then when we turn on this laser here we’re going to see scattering like in the picture
I showed. And we can pick a bit of that up in, say,
a photomultiplier, some detector. On the other hand, if we’re close to resonance
then there’s a high probability that when we turn on this– [COUGHS] pardon me– this
laser here, that the atom will actually collapse into the upper state. And when that happens we don’t see any scattered
light when we turn on this laser here. So the nice thing is that we can easily discriminate
which state the atom has been detected in. So, for example, this data here– we were causing transitions, and for the data that we took we would average the fluorescence for about a millisecond. You can easily see that if we put a discriminator here, we can tell with essentially 100% efficiency what level the atom is measured to be in. So there's a lot of details in this.
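A minimal sketch of that bright/dark discrimination in Python; the count rates and threshold here are hypothetical, chosen only to illustrate the idea of thresholding the photon counts:

```python
# Illustration only: decide "bright" (ground state, scatters light) versus
# "dark" (shelved in the clock state) by counting photons and thresholding.
import random

def detection_counts(ion_is_bright):
    """Simulate photon counts in one detection window (hypothetical rates)."""
    mean = 20.0 if ion_is_bright else 0.2
    # crude Poisson-like draw built from many small Bernoulli trials
    return sum(1 for _ in range(1000) if random.random() < mean / 1000)

THRESHOLD = 5   # counts; above this we call the ion "ground state"

for bright in (True, False):
    c = detection_counts(bright)
    print(c, "ground state" if c > THRESHOLD else "excited (clock) state")
```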
But anyway, one of the interesting ones in this mercury ion experiment was, basically, the electrode structure I showed in cartoon
form– it’s a little hard to see here, but that’s this same structure that I showed in
the cartoon. In this case one of the unfortunate things
is that with mercury the way we would create the ions is, we’d just leak in a little bit
of mercury vapor, and then in those days we’d– very crude experiment– we would just make
a crude electron beam with a homemade filament, and stream electrons through that ring electrode
in the trap. And then when the neutral mercury atoms were
inside, occasionally one would be ionized by this electron beam. And when that would happen the ion would be
trapped. But one of the problems we had was that, it
turns out mercury– if you leak it into a metal vacuum system, which you can see that
this is– the container without the lid– is that the mercury, it turns out, when you
leak it into a vacuum system like that what it does is it amalgamates with copper. But on the other– so what it has, it basically
diffuses into the copper. But the problem is there’s always– even if
we try to pump all the mercury away, the mercury is effusing out of the vacuum. And the problem that made was it turns out
that when the mercury ions were in the excited state, if they collided with a neutral mercury
atom in the background, they would [INAUDIBLE] associate, we’d say, and it would make a mercury
molecule, a dimer, a two-atom molecule. And basically that was the end of our atomic
ion qubits– or the ion for the clock. And so it was just a horribly annoying problem,
because we'd get all the lasers tuned up, and then after 10 minutes or so this collision
process would happen and we'd have to reload the ion and tune everything up again. So basically we just hit it with a sledgehammer. And that is, we put our ion trap
in its enclosure. We attached it to a liquid helium reservoir,
and basically the liquid helium temperature under normal conditions is at about 4 Kelvin
absolute. So basically everything freezes out, except
for maybe some helium. But helium is not a problem. So literally we went from storage times–
earlier we could keep the ion for only about 10 minutes– and we could literally increase the lifetime to about six months. And even that was just because we'd do something
stupid, which would kick the ion out. But there’s other advantages to going to low
temperature. One is that– I’ve already alluded to. We’d basically freeze out most of the background
gas so the collisions are almost reduced to nothing. Another subtle thing is that it turns out
the clocks are perturbed by thermal radiation, so-called black body radiation. And I was always– when we were first thinking
about this, it turns out the thermal radiation in the room, the electric fields due to that
radiation, are not insignificant. They’re about 10 volts per centimeter. So they’re actually fairly strong fields. And we’re all bathed in this radiation. Anyway, this radiation could shift the frequency
of the clock, and so it would be nice to reduce that. And one way to do it is to put an atom at
a very cold temperature. OK. So there are other issues. Now, as I mentioned, the absorption line of this mercury ion would be about 1 Hertz– one cycle per second– wide, out of about a million billion cycles per second. So it gave very high resolution. But one of the problems we had going into
this– everybody had– was the laser– although we knew how this should work, the lasers were
not stable enough. In other words, the wavelength– the frequency
and wavelength of the radiation would fluctuate around. So the standard technique that people used
for many decades– and it’s still the way people do these experiments– is basically
we form a resonant cavity with two mirrors. And what we do is we shine our laser radiation
in there. And, for the physics students here– they
know this problem– is, basically, the cavity will only build up radiation and transmit it through when an integer number of half-wavelengths of this radiation fits inside the distance spanned by these
two mirrors. And so although there are many frequencies where that can happen, we basically stabilize our lasers to one of these transmission conditions, and then the stability is governed just by how well we can control the distance between the mirrors.
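A small numerical sketch of that condition; the 10-centimeter spacing and the laser wavelength here are assumed, illustrative values, not numbers from the talk:

```python
# Two-mirror cavity: it builds up and transmits light when an integer number
# of half-wavelengths fits between the mirrors, so the transmitted frequency
# tracks the mirror spacing.
C = 2.998e8              # speed of light, m/s

L_CAVITY = 0.10          # assumed mirror spacing: 10 cm
WAVELENGTH = 563e-9      # assumed laser wavelength (illustrative)

print(round(2 * L_CAVITY / WAVELENGTH))   # ~355,000 half-wavelengths fit

f_laser = C / WAVELENGTH                  # ~5.3e14 Hz
# A fractional length change gives the same fractional frequency change,
# so holding the laser to ~1 Hz means holding the spacing to about:
print(L_CAVITY * 1.0 / f_laser)           # ~2e-16 m
```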
And actually, this has a lot of similarities– you know, the big splash in the last year or two was to be able to
detect gravity waves by looking at the oscillations of these mirrors. And we’re not nearly as sensitive as those
instruments, but the idea is very much the same. We can stabilize our lasers as long as the
spacing between these mirrors is very precise. So anyway, Jim Bergquist and I had a very simple idea to get rid of mechanical vibrations, which he basically carried out. What this cartoon is meant to show here is one mirror. Another mirror is hidden, but there's a glass spacer here that's very rigid and provides a rigid distance between the ends of these mirrors. One thing– again, it's no comparison to the
sensitivity of the gravity wave detectors, but what I always found interesting was that
when we’d set this down on the floor to tune up the optics on the small optical table,
is we’d always see some noise very close to the center frequency where the laser would
be locked to this resonance given by the spacing. And we always see some noise on the side at
much less than a Hertz, about a seventh of a Hertz. And that turns out to be the waves crashing
on the beach in California that gives this broad sort of noise spectrum at one seventh
of a Hertz. Anyway, the state of the art these days–
we’re still bothered by vibrations. And this is– we’re now affected by the same
things that, in part, limit the detection sensitivity of the gravity wave detectors. We still have to worry about mechanical vibrations. But what it comes down to now is, we do have
to worry about the spacing of the mirrors, but the main thing we’re left with is actually
the coatings on the mirrors that give the reflectivity. They’re also mechanical systems. They vibrate and they give some noise, which
limits how well we can stabilize, in our case, the lasers to this cavity. OK. We’re coming back– I’m just going to summarize
the many years of work. And first of all, the fact that we’re trapping
the ions– their average velocity is 0, so we get rid of what’s called this first-order
Doppler effect. And what we call the second-order Doppler shift, or time dilation, is highly suppressed by this idea of laser cooling. And I've already mentioned we suppress several
other effects by going to low temperature in this apparatus. So the one thing that we felt proud about–
and it’s mainly due to Jim Bergquist– was that we had been chasing the performance of
cesium for decades. And so in about 2006 then this was the first
clock that could demonstrate that an optical clock was actually better in terms of performance
than the cesium clocks. And so nowadays all the standards labs for
sure– we’ve all gone to optical clocks. Some of the other ions that people are using–
we’re actually using aluminum these days. It’s a bit better performance than mercury. But there’s many choices. And of course there’s many neutral atom choices. One of the interesting possibilities that
people are looking towards is the thorium nucleus, which has a nuclear transition whose energy levels, by coincidence, are separated by something close to an optical transition– although as time has evolved it appears it's pretty far in the ultraviolet. But nevertheless this is an interesting idea: rather than using atomic transitions, to be
able to use nuclear transitions. So let me just come back to these so-called
systematic effects, the environmental effects that cause frequency shifts. I've told you about the first-order Doppler
shift where we– because the atom is trapped, its velocity goes to zero. And then there’s this so-called time dilation,
or second-order Doppler shift. Now, I lied to you a little bit. Although the atom is trapped, and it's true
that its average velocity goes to 0, one thing we have to worry about is that the laser will–
we have an optical table that the laser sits on, and then our trap is, say, at the other
end of the optical table. And what we have to worry about is that the
temperature in the lab changes and the table is expanding and contracting a little bit. And the velocities we're sensitive to, at the level of precision we're at now, are about 3/10 of a nanometer per second. So we have to stabilize the distance between the laser and the clock atom in this trap.
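The first-order Doppler shift from that drift is tiny but not negligible at this level; a one-line check in Python:

```python
# First-order Doppler shift from a slow drift of the laser-to-trap distance.
C = 2.998e8            # speed of light, m/s
v_drift = 0.3e-9       # 3/10 of a nanometer per second, as quoted above

print(v_drift / C)     # ~1e-18 fractional frequency shift
```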
And what we do there is we actually borrowed a technique from satellite ranging, where basically the ranging is done by sending a
signal out reflecting it off– effectively reflecting it off the satellite. And the signal that comes back is shifted
by the Doppler shift, by basically twice the first-order Doppler shift. And from that we can measure the velocity,
in that case, of the spacecraft. We do the same thing here, only the velocities
are quite a bit smaller than the velocity of a satellite. Or rather, in this case, a deep-space probe
that's– but anyway, we use exactly this same technique to subtract out this Doppler
shift. And I’ve also talked about the time dilation
shift, this famous so-called twin paradox by Einstein. And we suppress that with laser cooling. There’s another effect which is also– well,
anyway. So the net result of our experiment so far
is that we're down to a certain fractional precision, which is how we characterize all the clocks– the real measure of the performance is the relative precision. So we're down to a level of about 8 parts
in 10 to the 18 with these experiments. So one of the effects we have to worry about
is this so-called– again, due to Einstein’s so-called gravitational potential redshift. And in addition to the time dilation shift
due to motion, Einstein also showed us that clocks run at different
rates in different gravitational potentials. And so when– one example I can give you is
that, suppose you had a twin brother or sister and you were separated at birth and suppose
your twin lived in Boulder, Colorado where we are, about a mile above sea level, and
you lived at sea level. It turns out that there is an effect, but it's nothing to really worry about too much. In fact, after 80 years your twin will be about a millisecond older than you are due to this potential difference. So it's a very small effect. Nevertheless, we can see this effect.
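A rough sketch of the sizes involved, using the standard weak-field expression for the gravitational redshift (the heights are ones mentioned in this talk):

```python
# Gravitational redshift between two heights: df/f ~ g * dh / c^2.
G_ACCEL = 9.8          # m/s^2
C = 2.998e8            # speed of light, m/s

def redshift(dh_m):
    return G_ACCEL * dh_m / C**2

seconds_80_years = 80 * 365.25 * 24 * 3600
print(redshift(1600.0) * seconds_80_years)  # ~4e-4 s: of order the millisecond quoted
print(redshift(0.33))                       # ~3.6e-17: the 33 cm table-raise demo
print(redshift(0.01))                       # ~1e-18: per centimeter of height change
```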
So, kind of as more of a stunt, but also kind of a fun thing to do, is to demonstrate this effect on a more everyday scale. So just to show this effect– this is our optical clock. This one is based on aluminum, but the principles are very much the same as what I described for mercury. And so obviously this is not the one you wear
on your wrist yet. But nevertheless we made a clock like this
in one lab, and in an adjacent lab we had another clock based on aluminum which was,
as far as we could tell, identical. Anyway, we measured the ratio of the frequencies
of those two clocks, and it was about– I forget how many– on the order of 17 digits,
anyway, that they were running at the same frequency. And then, as a demonstration, James Chou, who was running this experiment– you can see he's put some jacks under the table here– raised the table up by about a foot, 33 centimeters, and we
could actually see that we could resolve this so-called gravitational potential redshift. So the precisions now are such that we do
have to worry about these very small effects, including this gravitational potential redshift. So where are we at these days? So I showed the number we achieved, this fractional
precision, that is taking account of all the environmental effects. So we actually had the lead for a few years,
but now there’s many other– there’s always been many other groups working on that. And so these results have been superseded
more recently by– first of all, we had some colleagues in basically the German version
of NIST where they’ve made a clock based on ytterbium ions, and the precision in their
clock is a little more than a factor of two smaller than in ours. The other thing I haven’t talked about but
it's important is that I've been talking about single ions. And, for the reason I mentioned, we do want to scale up to larger numbers of ions. It's simply that we get more signal with more ions, and so we can improve the time it takes to reach a measurement at a given precision. So we're making traps that don't look the
same, but basically can make strings of ions and therefore increase the numbers that we
can play with. And also, there have been a number of experiments
done with neutral atoms, and they don’t use the same kind of traps, but they’re able to
make traps by using laser beams. And kind of a cartoon version in two dimensions
is kind of like an egg crate that can hold individual atoms, and they can do this in
three dimensions. And so this work first started– the first
high-precision measurements were done by the group of Katori in Japan, but the leader in
more recent years has been Jun Ye and the group at JILA. And they’re down now to where the so-called
systematic uncertainty is about four times lower than we're able to do. And the other thing is they can hold quite a large number– a few thousand atoms– in their so-called egg-crate structure. So they can much more quickly reach a high precision
than we can with our single ion. So we have some catching up to do now. But anyway, so the best precisions are very
close to a part in 10 to the 18th. And just another thing to compare back to
this gravitational potential redshift: the shift due to this gravitational potential redshift is about one part in 10 to the 18 for a centimeter rise in height. So we're getting down to these very sensitive
values. Actually, coming back to one problem we have because of this gravitational potential redshift: in order to compare two clocks, we can make measurement comparisons between clocks, say, one at sea level and one in Boulder, Colorado. The problem is, we're limited in precision because we don't know the height of Boulder, in terms of this gravitational potential redshift, to better than about 30 centimeters. So, in fact, with these higher precisions
we’re limited in precision simply because we don’t know the gravitational potential
redshift between sea level and Boulder. So the only way we can make precise comparisons
of these really accurate clocks is we have to bring them together where we know they’re
in the same gravitational potential. So this is kind of a headache. Right? But of course we try to take the high road, and so one view is that, in the future, we'll take these very accurate clocks and be able to
map what’s called the geoid, the gravitational potential around the Earth. And I’ve just mentioned some of the groups. There’s many more around the world working
both on ions and neutral atoms pursuing these ideas. So what’s the future? And so navigation is still one of the main
applications. The other is in synchronization. So, for example, in network synchronization,
timing signals, we need these higher and higher precisions. But we still continue to think
about navigation. So at the centimeter scale you’d think, well,
who really cares. And one application that people talk about
is that, for example, a precursor to earthquakes is usually– an earthquake is preceded by
strain in the earth, meaning two relative locations that might be separated by kilometers–
they’re going to their relative height compared to the center of the Earth will change. And this strain– and eventually the strain
will cause the earth to let go and they’ll readjust. And so this– measuring this strain at this,
perhaps, centimeter level can be, maybe not a predictor, but it can be a signal that an
earthquake is building up, or the probability of an earthquake is increasing. And so we think about doing this. And I already mentioned this idea of mapping
the gravitational potential redshift around the world. There’s also a bunch of fun things we can
think about doing. And one of the interesting things is if we
take clocks that are made on different elements, it turns out that the frequencies that they
run at in general depend differently on the fundamental forces– say, the so-called strong nuclear force and the electromagnetic force– in the sense that the relative contributions of the different basic forces to the frequencies vary from element to element. And so if we measure the frequency
ratio of clocks based on two different elements over time, one of the interesting things to
think about is, are the fundamental “constants,” quotes, that govern the frequency of these
transitions, are they changing in time? So we’ve been able to put limits on that. And this continues on. There’s still a lot of motivation to do that. We’re always– as a popular game we’re always
trying to see deviations from Einstein's theory of general relativity, and so far he's doing
just fine. But nevertheless, we still try to probe with
higher accuracy to see whether there might be variations. So with that I’d like to thank the real people
doing the work. This is just our group working on these experiments. And there are many groups around the world working
on these problems. I want to also acknowledge our laboratory
director, Katharine Gebbie, who was laboratory
like the Time and Frequency Division. And she was very supportive of some of these
basic ideas. For example, this idea of laser cooling. I mean, we just wanted to do it because it
would be kind of nifty to demonstrate this effect. But it became very important for clocks. And, in fact, it’s used in all high-accuracy
clocks. And she was very supportive of this kind of
exploratory work. Unfortunately she passed away a year or so
ago. But nevertheless, she made a great environment
for us. And I’m going to– for those who might know
a few people there, a list of people– actually, how much time– AUDIENCE: [INAUDIBLE] DAVID WINELAND: [LAUGHS] So just– I mean,
often– and maybe you’ve seen these things before, but I’m going to say a little bit
about the Nobel Prize here. So I think most of you know that– [CLINKING] Oops. The prize is– oh. I’ll find that later, I hope. Anyway, the prize is announced around October
10th or so. And, I mean, the whole thing is very surreal
because, you know, it gets so much attention. But anyway, so they announce the prize in
early October. And then the ceremony is actually coming up
within days here. I think it’s on the tenth of December– anyway,
on the date of Nobel’s death. And anyway, so we go there in early December. And this is– what’s nice– and Stockholm’s
a very charming city and, actually, even during the dead of winter here. You can see the snow on the ground. It was very cold. But they have these open-air markets, and
so– I mean, really, they do a Christmas in a very charming way, I must say. But anyway, actually one thing about this
figure– so we get to spend time wandering around a little bit. And one interesting thing about this figure–
I mean, Sweden is pretty far north, so this picture was at about 3 o’clock in the afternoon,
and you can see the sun has already set. So it’s getting pretty dark pretty early there. So anyway, everything is just way over the
top in these ceremonies. And so this was the awards ceremony, and the
laureates over several disciplines were all lined up in this first row here. And here is the royal family. And so basically the way the ceremony goes
is, each person– a little bit is said about the person, and they walk up and receive the
award from the King. And so you can see everything is very well
organized. We have a rigid uniform. And part of that uniform for us was that we
had– of course we had to wear tuxedos, but also we had to wear patent leather shoes. And these patent leather shoes were– this
was a firm carpet, but walking on this carpet was like walking on ice. And the whole time when I was going up to
receive my award I said, just don’t fall. Anyway. So I made it through OK. This is me receiving the award from the King. And anyway, after this very fancy ceremony
the royal family had a few people over for dinner. [LAUGHTER] And all of this is– I mean, it’s just– everything
is so much over the top. Actually, one thing I didn’t know before going
there is, Nobel actually favored the physicists over the chemists and the other disciplines,
and so we were the ones that got to sit with members of the royal family at these different
events, which was somewhere here in the middle of this table. The other thing to say is that there were about– I forget– I think it was like 1,200 people here. And so what you learn is that the Nobel laureates and their guests were only a small fraction: at these fancy official events each laureate could only invite 12 people, and there were, I think– now I’m forgetting– eight of us that year to receive the prize, across the different disciplines. So that’s about 100 guests, out of about 1,400 people at this
thing. And what you learn is that this is a big deal
for not only Swedish society, but officialdom. You know? It’s a big deal to go to this event. And it’s also nice they invite some students
from the local universities. But anyway, it’s this very fancy thing. And so the dinner there is extremely well-organized. So there was one of these guys here who was, like,
an orchestra conductor. So, you know, there was– I don’t know– 100
or more servers, and they each would serve a few people, but he would wave his wand and
everything would be done in synchrony. And anyway, it was just, as I say, a very–
it was really a surreal thing. But of course very, very fun. So one of the nice things was that the person
I shared the award with was Serge Haroche, whose lab is in Paris. And I’ve gotten to know Serge– oh gosh, 35
years ago. I first knew him through the literature because
he had done some nice work. And then I got to know him personally about
25 or 30 years ago, and gradually our wives became friends as well. So it was a great pleasure to share with him. And I think we both feel the same way that–
and I think most laureates do– that, you know, the one thing to say is that the probability
of receiving this award is extremely small. And I think we both felt we were lucky to have it happen. But there are many qualified people, and we feel more like we represented our field rather than our individual accomplishments. But nevertheless, it was a real thrill to share it with my friend Serge. Anyway, with that I will stop for the final
time. And this– of course these are the people
in our lab doing the work. [APPLAUSE] KAI-MEI FU: OK. So thank you very much for that wonderful
talk. I think we have time just for a couple of
questions from the audience. Are there any questions? Yeah. Come on up. There’s a mic right here in the aisle, in
both aisles. AUDIENCE: Hello. Is this mic on? OK. No? Yes. OK. I was curious how you keep time on your person. On your person, out and about, how do you
keep time? DAVID WINELAND: I have a watch here, and it’s
good to about, maybe two minutes. [LAUGHTER] So not very well. AUDIENCE: Thank you. DAVID WINELAND: But probably more seriously,
like all of you I have my cell phone and I rely on that these days to give me a better
time. AUDIENCE: So, another easy question. When you lifted the table up and then moved
it back down, did you get back to the same difference of zero? [LAUGHTER] DAVID WINELAND: Yeah. It was reproducible within our precision. You know? Actually, I must say that this was– you know,
it was a demonstration, but there have been much more accurate demonstrations of this gravitational potential redshift. And I mentioned– I gave a reference when we talked about compensating for the Doppler shift due to the expansion and contraction of our table. The experiment I quoted there goes back a ways– it used a rocket on a suborbital flight. So this rocket went over– I forget where it was launched from, but it went up in this arc and then crashed into the sea. But during this– I don’t know, I forget– roughly an hour that it was on this trajectory, of course the gravitational potential changed significantly. Anyway, they were able to measure the gravitational potential redshift to about a part in 10 to the 6. And ours was only– this thing that I showed you was about 10%, so just to give the basic idea.
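For a rough sense of scale, the fractional shift for a small height change follows from the weak-field relation g * delta_h / c^2. The 0.33 m lift below is an assumed, illustrative number; the actual height isn’t quoted in the talk.

```python
# Back-of-the-envelope gravitational redshift for a small height change.
# delta_h = 0.33 m is an assumed, illustrative lift height.
g = 9.81               # m/s^2, local gravitational acceleration
c = 299_792_458.0      # m/s, speed of light
delta_h = 0.33         # m, assumed height change of the raised clock

frac_shift = g * delta_h / c**2
print(f"fractional frequency shift: {frac_shift:.2e}")   # about 3.6e-17
```

Seeing a shift like that at the roughly 10% level mentioned above means resolving frequency differences of a few parts in 10^18, which is why optical clocks are needed for a tabletop version of the measurement.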
And this rocket had a hydrogen maser on board, actually. And that was done quite a while ago, I think in the late ’60s. And that’s still the most accurate measurement of the gravitational potential redshift– this early experiment that was done with the hydrogen maser. AUDIENCE: I was wondering– is the mic on? OK. I can’t hear myself. What makes an atom a good candidate to be
used for an atomic clock? DAVID WINELAND: OK. I mentioned this briefly, but I mentioned
a lot of things. But again, the basic idea has to do with the
fact that all atoms of a given kind, as far as we know, are exactly identical. So we have things that perturb the frequency,
one being these esoteric things like the gravitational potential redshift. But if we bring– if we have two atoms that
undergo the same environmental perturbations, as far as we know, they should run at exactly
the same frequency. And so, as I was trying to make the case there,
for example for a pendulum clock we have to worry about these things like the pendulum,
the length of the pendulum can vary in production. But there isn’t that difference with atoms. As far as we know they’re exactly identical. So– AUDIENCE: I guess my question was, which elements? DAVID WINELAND: Oh. I see. Yeah. There’s no simple answer. In fact, there’s– I don’t know how many. There’s probably 25 different atoms or ions
that people consider. And they all have advantages and disadvantages. And some may be good for reducing some environmental
effect, and others are better for other reasons. So there are no big winners. I would say nobody has come up with an atom
where this is the choice everybody should be using. One interesting sidenote about that, what’s
amazing is that the cesium clock I mentioned was first demonstrated in the ’50s. And then about the mid-’60s it was decided
it would become the standard for the unit of time, the second. And what’s amazing to me is that it was the best clock from basically the mid-’60s to about 2006, when we did this optical clock experiment. So it’s just remarkable that it was the best choice for this very long length of time. Of course, you know, they were always working to improve it, but still it was an amazingly good choice. It was the best clock for that length of time.
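As a rough footnote to the pendulum comparison earlier in this answer, here is a small sketch of why manufacturing variation matters for a pendulum but has no analog for atoms. The 1 mm length error on a 1 m pendulum is an assumed number, not a figure from the talk.

```python
import math

# A pendulum's period is T = 2*pi*sqrt(L/g), so a small length error dL
# shifts its rate by dT/T = dL/(2L). The numbers here are illustrative.
g  = 9.81    # m/s^2, gravitational acceleration
L  = 1.0     # m, nominal pendulum length
dL = 1e-3    # m, assumed 1 mm manufacturing error

T = 2 * math.pi * math.sqrt(L / g)
fractional_error = dL / (2 * L)
print(f"period: {T:.3f} s, fractional rate error: {fractional_error:.1e}")
print(f"drift: about {fractional_error * 86400:.0f} seconds per day")
```

Two cesium atoms, by contrast, have no length to get wrong; any rate difference between them comes from their environment rather than their construction.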
KAI-MEI FU: So it’s getting late, so unfortunately it’ll have to end this evening. But I’d like to thank you all again for attending
this lecture. DAVID WINELAND: Yeah. Thank you! [APPLAUSE] [MUSIC PLAYING]
