ANSWERS TO QUESTIONS ASKED FOLLOWING
THE 2003 COSMOLOGY CONFERENCE IN COLUMBUS, OHIO
Question: Have you published anything in the past 2-3 years or do you plan to in the near
future? How can we purchase it?
Setterfield: The recent papers
can all be referenced here on this website. In addition, a new paper has just
been accepted for web publication by the Journal of Theoretics. The new
paper deals with mass and gravity. I am currently working on another paper with
a mathematician dealing with the recombination of Planck Particle Pairs. We are
hoping to submit that for publication in early 2004. We are also hoping to have
the rewrite of the book Creation and Catastrophe finished within a month
or two and submitted for publication. Anything happening in this area will be
announced on this website.
(2012 note: "Creation and Catastrophe" became our book "The Bible and Geology" which is for sale along with the DVDs "The Christmas Star" and "Anomalies." The paper with Daniel Dzimano was published in 2003, "The Redshift and the Zero Point Energy.")
Question: Quasars give off matter particles besides light. What happened to the particles
from the quasar in our galaxy?
Setterfield: The particles
emitted from quasars generally are ejected at the ‘poles’, or perpendicular to
the plane of the associated galaxy. A pretty good artist’s rendition of this
can be found at the Science Photo Gallery.
Question: Why are two different dating methods used to
date the two star populations? Are they consistent methods? What is the
evidence they are consistent?
Setterfield: First of all, to
quote from A Brief Stellar History, here is what the question concerns:
A study of the Thorium to Neodymium
ratio in stars like our sun has concluded that the minimum age of the sun and
similar Population I stars was 9.6 billion atomic years, with a maximum of
11.3 billion atomic years [Nature, 328, pp.127-131, 9 July, 1987]. From these
figures, a good average age for the sun would be about 10.5 billion atomic
years. This view finds some support from the fact that the age of the oldest
events recorded in Moon rocks ranged from 5.4 billion, through 7 billion, and
up to 8.2 billion atomic years [Science, 167, pp.479-480, 557, 30 January,
1970]. In similar fashion, a recent study of Population II stars using the
Uranium to Thorium ratio has indicated that the oldest stars in our galaxy
have an age of 12.5 ± 3 billion atomic years [Cayrel et al., Nature, 409, p.
691, 8 February, 2001]. Earlier work based on Thorium and Europium abundances
suggested 15.2 ± 3.7 billion atomic years [Cowan et al., Astrophysical
Journal, 480, p. 246, 1997]. Even though the limits of error are larger, the
data from this 1997 study suggest that the actual value should be at the
higher end of the range indicated by the more precise 2001 study. When further
studies are completed, it may be expected that results may converge to a
figure somewhere around 14.5 billion atomic years, which would accord with
data from stellar theory [Riess et al., arXiv:astro-ph/9805201v1, 15 May,
1998]. The difference in atomic age between the two main Populations of stars
is thus about 4 billion atomic years.
In summary, other methods have been
used to date the two populations of stars, and they give concordant results.
However, the methods quoted in the lecture (thorium/neodymium ratio for
Population I and uranium/thorium ratio for Population II) give results with the
lowest error bars.
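For readers unfamiliar with how such isotope-ratio ages are computed, the underlying decay arithmetic can be sketched. This is a simplified illustration only (the cited Th/Nd and U/Th studies involve modelled production ratios, not this bare formula): assuming no initial daughter isotope and a constant decay rate, an atomic age follows from t = ln(1 + D/P)/λ.

```python
import math

def decay_age_years(daughter_to_parent: float, half_life_years: float) -> float:
    """Atomic age from a daughter/parent isotope ratio, assuming no initial
    daughter product and a constant decay rate: t = ln(1 + D/P) / lambda."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1.0 + daughter_to_parent) / decay_constant

# Hypothetical illustrative numbers: a daughter/parent ratio of 1.0 with a
# half-life of 14 billion years (roughly that of thorium-232) gives an age
# of exactly one half-life.
print(round(decay_age_years(1.0, 14.0e9) / 1e9, 3))  # 14.0 (billion atomic years)
```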
Question: What are the estimated sizes of parent bodies of the asteroids – the exploded
planet and its moon?
Setterfield: It is generally
stated that if all the asteroidal material were collected together, it would
make a planet probably smaller than our moon. However, this does not take into
account the material which has been lost into the sun or has impacted other
planets or moons. The planet would not have been very large, though,
and its moon, of course, smaller. The question behind this question may be “Why
did these two bodies explode when other planets and their moons did not?” The
answer to this lies in the high aluminum-26 (aluminium-26) content that these
bodies had initially. We see the high resulting magnesium-26, which is the
result of the radioactive decay of the parent element. This decay would have heated
the planet and its moon rapidly under the conditions which also produced the
high light speed of the early cosmos, and this rapid heating would have driven
the water not only out of the rocks, but would have caused the parent bodies to
explode. The key is in the make-up of the early bodies. This can be referenced
to the Cambridge Atlas of Astronomy, p. 122. For further details, please see A Brief Stellar History.
Question: Please explain more about the Zero Point Energy. If it is electromagnetic, what
frequencies? What polarization? What field strengths? Can it be measured
other than with parallel plates? Do secular scientists discuss it?
Setterfield: I think there is
quite a bit about the ZPE here on my website. You might check the Discussions section regarding the ZPE questions and also my paper Exploring the Vacuum. Yes, the ZPE is electromagnetic in character; its
frequencies range from very low to very high. The cutoff frequency at the high
end is determined by the Planck length. I am unaware of a limit at the low
end. I am not aware of any studies done on polarization of the ZPE (in the same
way the cosmic background radiation is polarized) and did a quick look on the
net for you on this one. The closest I found was the following from the California Institute of Physics and Astrophysics:
Radio waves, light, X-rays, and gamma rays are all forms of
electromagnetic radiation. Classically, electromagnetic radiation can be
pictured as waves flowing through space at the speed of light. The waves are not
waves of anything substantive, but are in fact ripples in a state of a field.
These waves do carry energy, and each wave has a specific direction, frequency
and polarization state. This is called a "propagating mode of the electromagnetic field."
Each mode is subject to the Heisenberg uncertainty principle. To
understand the meaning of this, the theory of electromagnetic radiation is
quantized by treating each mode as an equivalent harmonic oscillator. From this
analogy, every mode of the field must have hf/2 as its average minimum energy.
That is a tiny amount of energy, but the number of modes is enormous, and indeed
increases as the square of the frequency. The product of the tiny energy per
mode times the huge spatial density of modes yields a very high theoretical
energy density per cubic centimeter.
From this line of reasoning, quantum physics predicts that all of
space must be filled with electromagnetic zero-point fluctuations (also called
the zero-point field) creating a universal sea of zero-point energy. The density
of this energy depends critically on where in frequency the zero-point
fluctuations cease. Since space itself is thought to break up into a kind of
quantum foam at a tiny distance scale called the Planck scale (10^-33 cm), it is argued that the zero-point fluctuations must cease at a corresponding
Planck frequency (10^43 Hz). If that is the case, the zero-point
energy density would be 110 orders of magnitude greater than the radiant energy
at the center of the Sun.
This article implies that every
electromagnetic wave making up the ZPE has a random polarization, so there
should be no net polarization of the ZPE in general.
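The mode-counting argument in the quotation can be made concrete. In the stochastic electrodynamics literature the zero-point spectral energy density is usually written ρ(f) = 4πhf^3/c^3 (the hf/2 per mode multiplied by the mode density, which grows as the square of the frequency); integrating from zero to a cutoff f_c gives πh·f_c^4/c^3. A rough numerical sketch, taking the Planck frequency quoted above as the cutoff:

```python
import math

H_PLANCK = 6.626e-34  # Planck's constant, J*s
C_LIGHT = 2.998e8     # speed of light, m/s

def zpe_energy_density(f_cutoff_hz: float) -> float:
    """Integrate rho(f) = 4*pi*h*f**3 / c**3 from 0 to f_cutoff,
    giving pi*h*f_cutoff**4 / c**3 in joules per cubic metre."""
    return math.pi * H_PLANCK * f_cutoff_hz**4 / C_LIGHT**3

# Cutoff at the Planck frequency (~1e43 Hz) quoted in the text.
print(f"{zpe_energy_density(1e43):.1e} J/m^3")  # on the order of 1e113 J/m^3
```

The enormous result is the point of the quotation: a tiny hf/2 per mode, summed over a mode density growing as f², yields a theoretical energy density far beyond any ordinary radiation field.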
The ZPE can be detected by a variety of
mechanisms, but to the best of my knowledge, the only way to actually measure it
is via parallel plates (the Casimir effect). Further discussion of this may be
found in my paper Exploring the Vacuum, in particular the section starting on page 8 with
Evidence For The Existence Of The ZPE
Experimental evidence soon built up hinting at the existence of the ZPE,
although its fluctuations do not become significant enough to be observed until
the atomic level is attained. For example, the ZPE can explain why cooling alone
will never freeze liquid helium [25, 48]. Unless pressure is applied,
these ZPE fluctuations prevent helium’s atoms from getting close enough to
permit solidification [15]. In electronic circuits, such as microwave
receivers, another problem surfaces because ZPE fluctuations cause a random
“noise” that places limits on the level to which signals can be amplified. This
“noise” can never be removed no matter how perfect the technology.
There is other physical evidence for the existence of the ZPE proving that it
is not just a theoretical construct. One such piece of evidence is something
called the surface Casimir effect, predicted by Hendrik Casimir, the Dutch
scientist, in 1948 and confirmed nine years later by M. J. Sparnaay of the
Philips Laboratory in Eindhoven, Holland [1]. The Casimir effect can be
demonstrated by bringing two large metal plates very close together in a vacuum.
When they are close, but not touching, there is a small but measurable force
that pushes them together. An elegant analysis by Milonni, Cook and Goggin
explained this effect simply using SED physics [49]. Given that the ZPE
consists of electromagnetic waves, then as the metal plates are brought closer,
they end up excluding all wavelengths of the ZPF between the plates except those
for which a whole number of half-waves is equal to the plates’ distance apart.
In other words, all the long wavelengths of the ZPF are now acting on the plates
from the outside with no long waves acting from within to balance the pressure.
The combined radiation pressure of these external waves then forces the plates
together [16, 32]. The same effect can be seen on the ocean. Sailors
have noted that if the distance between two boats is less than the distance
between two wave crests (or one wavelength), the boats are forced towards each
other.

The Casimir effect is directly proportional to the area of the plates.
However, unlike other possible forces with which it may be confused, the Casimir
force is inversely proportional to the fourth power of the plates’ distance
apart [50]. For plates with an area of one square centimetre separated
by 0.5 thousandths of a millimetre, this force is equivalent to a weight of 0.2
milligrams. In January of 1997, Steven Lamoreaux [51] reported
experimental verification of these details within 5%. Then in November 1998,
Umar Mohideen and Anushree Roy reported that they had verified the theory to
within an accuracy of 1% in an experiment that utilized the capabilities of an
atomic force microscope [52].
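The figures in this passage can be checked against the standard ideal-plate Casimir formula, F = π²ħcA/(240 d⁴). For one square centimetre of plate area separated by half a thousandth of a millimetre, the force does work out to the weight of roughly 0.2 milligrams, as stated:

```python
import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s
C = 2.998e8        # speed of light, m/s
G = 9.81           # standard gravity, m/s^2

def casimir_force_newtons(area_m2: float, gap_m: float) -> float:
    """Attractive Casimir force between ideal parallel plates:
    F = pi^2 * hbar * c * A / (240 * d^4)."""
    return math.pi**2 * HBAR * C * area_m2 / (240.0 * gap_m**4)

force = casimir_force_newtons(1e-4, 0.5e-6)  # 1 cm^2 area, 0.0005 mm gap
print(f"{force / G * 1e6:.2f} mg")  # about 0.21 mg of equivalent weight
```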
Regarding the last part of your
question, I think if you look up “zero point energy” on any of the search
engines, you will find that it is the subject of much discussion among a great
many scientists. Some of the articles are linked in my Discussion section on ZPE.
Question: Aren’t there other instances in Scripture where the ‘earth’ is used as an idiom
for people? (in reference to the earth being divided in Peleg’s day) -- such
as “…the earth was full of wickedness.”
Setterfield: There are a number
of problems with trying to associate the ‘earth’ with ‘people’ in Genesis 10:25
(“Two sons were born to Eber: One was named Peleg, because in his time
the earth was divided;…”). First, there have been many migrations of
people throughout time, and this is not considered anything extraordinary.
Second, there is a clue in his brother’s name: Joktan. Joktan means to 'cut
off', 'make small', 'kill', 'destroy', to 'diminish' or to 'tear off'. Not your
normal name for a child! The name Peleg itself means 'earthquake', 'division',
'channel of water'. It is also the root word for Pelagos, which was the old
Grecian term for the Mediterranean, and for the ‘pelagic’ or upper ocean region.
But the question was about the word
used for ‘earth’ in Genesis 10:25. That word is “eres” or “erets”. It comes
from an unused root probably meaning ‘to be firm.’ The word is used over 2500
times in the Old Testament. According to my Concordance for the NIV, it is
translated as “land” 1150 times, “earth” 524 times, “Egypt” 184 times (when used
in conjunction with “misrayim”), “ground” 160 times, “country” 92 times,
“countries” 39 times, “lands” 34 times, “Canaan” 29 times (when used in
conjunction with “k’na’an”), “world” 20 times, “region” 18 times, “territory” 17
times, etc. It is even translated as “clay” once and “field” once.
In other words, to presume this word
means ‘people’ or ‘population’ is to deny its clear meaning in the Hebrew and
the way it is used in Scripture. The word is used in the phrase “the earth was
corrupt” in Genesis 6:11. But it is important to look at this verse in context:

“Now the earth was corrupt in
God’s sight and was full of violence. God saw how corrupt the earth had become,
for all the people on earth had corrupted their ways.”

We know that dirt itself, or a land
mass, cannot become ‘corrupt’ morally. Nor can it be ‘full of violence’ in a
moral sense in and of itself. That is why verse 12, the following sentence, is
so important: “…for all the people on earth had corrupted their ways.”
Thus earth is defined here as a land mass, and the problem is identified as being
with the people, “basar,” which translates as “flesh, meat, body, mankind, bodies,
people, life, flesh and blood,” etc. The King James translates this word as
“flesh” – “all flesh had become corrupt”. It is probably because animals also
have ‘flesh’ that modern translations prefer ‘people’ as the word
to use. The word ‘basar’ itself comes from a root of the same spelling and
means ‘flesh’ or ‘person.’ In other words, the Bible is quite
clear about the distinction between a land mass and a person or population (“am”
in the Hebrew). In Genesis 10:25, the meaning from the men’s names as well as
from the word used is very clear – there was a division of land masses in the
days of Peleg.
Question: Why wasn’t your mechanism to explain the five anomalies included in ICC03? Why
aren’t you part of ICR’s RATE group?
Setterfield: Regarding the
ICC03, the abstracts were called for over a year ago and at that time I was not
ready to put a presentation together on this subject. Regarding the RATE group,
although I appreciate their work, they and I know that I am not supportive of
the model they are supporting which says that Noah’s Flood, in one year, built
up most of the geologic column. Since they are working from this basis and I
disagree strongly with it, it would have been difficult for us to work
together. I would like to repeat, however, that I appreciate the work they are doing.
Question: According to cosmological models, was Noah’s Flood inevitable from the way God
created the earth? The Bible seems to imply that God sent the Flood due to the
wickedness of man. If the Flood was inevitable from creation, how can we say
that what God created was very good?
Setterfield: This question
actually goes to the heart of “What did God know?” I think the answer can be
found in two places: Isaiah 45:18 in combination with 46:10, and in Revelation 13:8.

Isaiah – For this is what the
Lord says – he who created the heavens, he is God;
He who fashioned and made the earth,
He founded it; He did not create it to be empty,
but formed it to be inhabited –

…”I make known the end from the beginning,
from ancient times, what is still to come.
I say: My purpose will stand,
And I will do all that I please.”
And then from Revelation 13:8 – “…the Lamb that was slain from the creation of the world.”
In other words, God knew man would
sin. He had already prepared salvation for men in Christ. Yes, the Flood was a
response to the wickedness of men; nevertheless it had been planned at the
inception of creation. And whether or not we think the newly created world was
very good, God called it so. All was prepared and ready to go – including the
consequences of sin, which He knew was coming.
Question: Using your proposed theory concerning changing c, h, and ZPE, is it possible to
mathematically convert the ages obtained from standard radiometric dating to
“true” orbital ages? For instance, if a fossil is dated as one million years
old, does it convert to six thousand years old? What is the conversion factor?
Setterfield: Yes, it is
possible to convert the ages so that they harmonize. This is done using the
redshift curve, which is the same curve as light speed against time and the same
curve as radiometric dates against orbital dates. Information on this can be
seen on this website at the following URLs:
http://www.setterfield.org/cdkcurve.html -- The basic redshift curve
http://www.setterfield.org/biblicaldisc.htm#chronology – a discussion of
redshift and time.
http://www.setterfield.org/scriptchron.htm -- the last chart on this page
directly addresses itself to your question, although the entire article might be helpful.
Question: Do you agree with Dr. Humphreys’ conclusion that the quantized redshift (seen in
every direction from earth) is very strong evidence that the earth is
essentially at the center of the universe?
Setterfield: Dr. Humphreys and
I have a disagreement here. I do not feel the quantized redshift indicates our
position in the universe. Since no matter where one would stand in the
universe, that point would be ‘here and now,’ then from any point, looking out
would be looking back in time. This means that from every point in the
universe, the quantized redshift shells would appear to be expanding out from
that particular point, making every point appear to be the center of the
universe if someone was standing there observing.
Question: Is the assertion by most scientists that the speed of light always was, is now,
and ever shall be constant, science or faith?
Setterfield: Neither. It is
deliberate ignorance of the facts. For those unaware of the facts, it is faith
in those who instructed them. The data itself does not allow that conclusion.
Question: Is ‘c’ immune from the Second Law of Thermodynamics?
Setterfield: It is important to
remember that the laws of thermodynamics deal with heat transfer in a closed
system. So the Second Law of Thermodynamics really has nothing to do with light
speed. However, what the questioner might have been asking is whether or not
the generalized tendency towards increased entropy (increased disorganization)
applies to light speed. No, it does not. Entropy involves that which is
organized in structure, not the speed of electromagnetic radiation. The
original term “decay” in reference to light speed slowing is very misleading.
The speed of light itself has not decayed at all, really. Rather, the obstacles
(virtual particles) which absorb and then re-emit any given photon of light have
increased in number with the increase in the Zero Point Energy in space, thus causing
light to take longer to reach its final point of absorption – its destination.
However, in between obstacles, the speed of light remains the same as the moment
it was emitted from the atom. There is, in short, no such thing as ‘tired
light.’ There is simply light being held up in its travel by denser mediums. A
lay explanation can be found here at Setterfield Simplified.
Question: Dr. Lucas showed that radiohalos negate the change in the speed of light you
propose. What is your response?

Question: Do you have an explanation of the non-fuzziness and agreement with today’s
physical constants of the radio halos?

Setterfield: The response to
these two questions was given in Ex Nihilo, Volume 4, Number 3, October
1981. On pages 56-81, I have an article entitled “The Velocity of Light and the
Age of the Universe, Part Two”. The relevant paragraphs, which are quoted
below, may be found on page 69. The ellipses mark where previous equations in
the article were deleted for the sake of this quote:

[Those involved] in radiometric dating of rocks would maintain that pleochroic haloes provide
evidence that the decay constants have not changed. Crystals of mica and
other minerals from igneous or metamorphic rock often contain minute
quantities of other minerals with uranium or thorium in them. The α-particles
emitted by the disintegrating elements interact with the host material until
they finally stop. This produces discolouration of the host material. The
distance that the α-particles travel is dependent firstly upon their kinetic
energy. As the binding energy for a given radioactive nuclide is constant
despite changing c, so also is the energy of the emitted α-particle.
This arises since the α-particle mass is proportional to 1/c^2 … but
its velocity of ejection is proportional to c… Thus as kinetic energy
is given by ½mv^2, it follows that this is a constant. As the
α-particle moves through its host material, it will have to interact with the
same number of energy potential wells…which will have the same effect upon the
α-particle energy as now. In other words, if we might put it colloquially,
the α-particle’s energy will run out at the same position in the host material
independent of the value of c.
It might be
argued, however, that the α-particle’s velocity of ejection is proportional to
c, so it should carry further. This is not the case, though, as a
moment’s thought will reveal. Another reason that the particle slows up is
that the charge it carries interacts with the charges on the surrounding atoms
in the host material. In the past with a higher value of c, the
α-particle mass was lower, proportional to 1/c^2… The effective
charge per α-particle mass is increased by a factor of c^2. In
other words, the same charge will…attract or repel the lower mass much more
readily. Consequently the slowing effects due to the interaction of charges
are increased in proportion to c^2, exactly counteracting the effects
of mass decrease proportional to 1/c^2 which resulted in the higher
velocity. In other words, the radii of the pleochroic haloes formed by
ejected α-particles will remain generally constant for all c.
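The scaling argument in the quoted passage can be verified numerically: if the α-particle mass varies as 1/c² while its ejection velocity varies as c, the kinetic energy ½mv² is unchanged for any assumed value of c. A sketch under exactly those assumed scalings (arbitrary units):

```python
def kinetic_energy(mass: float, velocity: float) -> float:
    return 0.5 * mass * velocity**2

ke_now = kinetic_energy(1.0, 1.0)  # present-day reference values

# With c higher than today by a factor k, the quoted scalings give
# mass proportional to 1/k^2 and ejection velocity proportional to k.
for k in (1.0, 10.0, 5.0e6):
    ke_then = kinetic_energy(1.0 / k**2, 1.0 * k)
    assert abs(ke_then - ke_now) < 1e-9 * ke_now

print("kinetic energy is the same for every tested value of c")
```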
Question: If c is slower now than at Creation, due to collisions with space particles,
over many years of traveling through space, how do you explain the slow speed
detected in short paths of travel through vacuum equipment in Earthbound labs?
Setterfield: The two types of
slowing are different, but related. In both cases, the photons must travel
through denser mediums. The lab experiments have not been in total vacuums, but
have been involved with condensates, for example sodium atoms, at a very low
temperature – close to absolute zero. This article from Scientific American entitled Frozen Light should help make it more understandable.
In space, the slowing is provided by
the absorption and re-emission of light photons by virtual particles appearing
and disappearing at “instantaneous” speeds. However, with the increasing number
of them through time (there are billions of them in a cubic centimeter of
space), the apparent speed of light has slowed – meaning the time it takes light
to get from its point of origin to its final destination is longer.
Thus, although the methods used are
different, the effect of slowing light is basically the same as light is being
forced to travel through more and more ‘difficult’ mediums.
Question: Does new light (just generated near the observer) travel through space at the
same speed as old light that has been on its way to the observer for six
thousand years? How do the photons synchronize their speed across the Universe?
Setterfield: At any instant the
speed of light throughout the universe is the same unless the light is being
obstructed or delayed due to some local cause. Thus the answer to the first
part of the question is yes, the light emitted at any time will reach the
observer at the same speed, as the light in transit is affected by the
properties of space itself. The ‘synchronization’ effect is due to the fact
that the Zero Point Energy, which is the controller of light speed in the final
analysis, is the same throughout the universe at any given moment. If this is
still muddy in the mind of the questioner, it might be helpful to read Exploring the Vacuum
and the Discussion page on the ZPE
. As soon as the Journal of Theoretics has it posted, we will link my
new article concerning mass and gravity and the origin of the ZPE here as well.
Question: Has there been a decrease in the speed of light from 1980 to 2003? Why – why not?
Setterfield: Although the speed
of light changes are quite minor today and, in fact, a fluctuation is evident,
there is positive evidence that the speed of light is still slowing, as
explained in my analysis of the Pioneer 10 and 11 anomalies.
Question: ZPE seems to contradict entropy and thermodynamics in that it draws boundless
energy from “nothing” in a sense. Do you consider it a universal law God has
instituted to hold creation together?
Setterfield: The ZPE is not the
result of ‘nothing.’ It is the result of the potential energy which came from
the initial stretching of the universe converting to the kinetic energy of the
spinning and separated Planck Particle Pairs and then from their recombination,
which released the energy we refer to as the zero point energy. There is no
violation of the standard laws of physics involved.
Question: Would the electron orbital predictions that arise out of the model of the atom
described by Collins be affected by the transfer of zero point energy to the
electron that forms part of the Setterfield model? (note: the model of
the atom being spoken of is on the Common Sense Science website.)
Setterfield: The model of the
atom that is often used for initial calculations is the Bohr model – the
standard model found in most textbooks; by contrast the Common Sense Science (CSS)
model replaces an electron in an orbit with a ring electron made up of three
fibres of circulating charge. The Bohr atom model is certainly affected by the
ZPE. Electrons move faster when the ZPE is lower, and electron masses increase
with a higher ZPE. In the variable lightspeed (Vc) model, the electron orbits
take on higher energy values when the ZPE is higher. At the same time, with
increasing ZPE, the speed of light will drop, and electron masses will increase.
Since it is true that the CSS model has charge circulating in the electron ring
at the speed of light, it would be expected that any change in the speed of
light would have an effect. Thus one would also expect a concordant change in
their equations. In the classical concept of the electron, its mass can be
attributed entirely to electro-magnetic self-interaction. This presumably has
its equivalent in the CSS model. On that basis, it is possible that there may be
equivalent changes in the CSS model, but to be sure of that it would be better
to ask CSS to examine their equations and give a response since they know their
model better than I do.
Question: You mentioned a Planck Particle interaction with a photon as a slow down
mechanism for light. Looking at the cross section of a Planck Particle, do you
expect it to affect more of one frequency than another?

Setterfield: Firstly, it is not
Planck Particles which interact with the photons of light. It is the virtual
particles, which are much larger. There is, in fact, what might be called a
‘zoo’ of such particles with a variety of energies. Therefore all wavelengths
and frequencies would be uniformly affected.
Question: Steve Austin showed the radiometric dates from recent lava flows in the Grand
Canyon were the same as the lava in the Vishnu Schists underneath the Grand
Canyon. Can your model account for this data?

Setterfield: I am not aware of
this claim, and would like to know more about it. However, in the Proceedings
for the International Creation Conference 2003, starting at page 269, we find a
relevant report. Basically, the team headed by Dr. Steve Austin has attempted
to establish that different radioactive decay series have given different dates
for the same strata. The stratum involved is Precambrian and immediately above
the Vishnu Schist. It is called the Bass Rapids diabase sill. The article
claims, from this data, that under accelerated radioactive decay conditions, the
rates of acceleration are different for different radioactive mother/daughter
pairs. Basically, they show that rubidium/strontium determinations yield an age of
around 1100 million atomic years for the sill, while potassium/argon yields only
around 900 million atomic years. Other isotopes have their own concordant dates
within the range between the two. They suggest three alternatives: 1) argon
inheritance, 2) argon mixing, or 3) a change in the radio isotopic decay rates “by
different factors.” To these may be added a fourth option that is indicated by
a changing speed of light. At the time that the sill was being intruded, the
speed of light was approximately five million times its current speed.
Therefore, the range of 200 million atomic years reflected in the dates quoted
above is accounted for. It would only have taken forty orbital, or actual,
years during which the sill was cooling for this range of dates to have been
produced.
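The arithmetic behind the forty-year figure in the Bass Rapids discussion above is direct: under the model's assumption that atomic processes ran faster in proportion to c, a spread of 200 million atomic years accumulated while c was five million times its present value corresponds to 200,000,000 / 5,000,000 = 40 orbital years. A trivial sketch (this linear division is only the local approximation used in that answer; the model's full conversion follows the nonlinear redshift curve mentioned earlier):

```python
def orbital_years(atomic_years: float, c_ratio: float) -> float:
    """Convert an atomic-time interval to orbital (actual) time, assuming
    atomic clock rates scale linearly with the speed-of-light ratio."""
    return atomic_years / c_ratio

# Spread between the Rb/Sr and K/Ar dates for the sill, at c = 5e6 * c_now.
print(orbital_years(200e6, 5e6))  # 40.0
```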