Setterfield: A lot of cosmologists and science journal editors didn't think so. Neither did those editors who commissioned major articles on the topic.
There are in fact periodicities as well as redshift quantisation effects. The periodicities are genuine galaxy-distribution effects. However, they all involve high redshift differences such as repeats at z = 0.0125 and z = 0.0565. The latter value involves 6,200 quantum jumps of Tifft's basic value and reflects the large-scale structuring of the cosmos at around 850 million light-years. The smaller value is around 190 million light-years. This is the approximate distance between super-clusters.
The point is that Tifft's basic quantum states still occur within these large-scale structures and have nothing to do with the size of galaxies or the distances between them. The lowest observed redshift quantisation that can reasonably be attributed to an average distance between galaxies is the interval of 37.6 km/s that Guthrie and Napier picked up in our local supercluster. This comprises a block of 13 or 14 quantum jumps and a distance of around 1.85 million light-years. It serves to show that basic quantum states below the interval of 13 quantum jumps have nothing to do with galaxy size or distribution. Finally, Tifft has noted that there are red-shift quantum jumps within individual galaxies. This indicates that the effect has nothing to do with clustering. (November 16, 1999.)
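The arithmetic behind these figures can be checked with a short script. This is a minimal sketch only: the 2.73 km/s value for Tifft's basic quantum and the Hubble constant of 65 km/s/Mpc used below are assumptions introduced for illustration, not figures stated in the text above.

```python
# Rough numerical check of the redshift-quantisation arithmetic quoted above.
# Assumed values (not from the text): Tifft's basic quantum ~2.73 km/s and
# a Hubble constant of ~65 km/s per Mpc for the distance conversion.
C_KM_S = 299_792.458        # speed of light, km/s
BASIC_QUANTUM = 2.73        # km/s, Tifft's basic value (assumed)
H0 = 65.0                   # km/s per Mpc (assumed)
MPC_TO_MLY = 3.2616         # 1 Mpc is about 3.26 million light-years

def jumps(z):
    """Number of basic quantum jumps corresponding to redshift z."""
    return z * C_KM_S / BASIC_QUANTUM

def distance_mly(z):
    """Approximate Hubble-law distance in millions of light-years."""
    return z * C_KM_S / H0 * MPC_TO_MLY

print(round(jumps(0.0565)))          # ~6,200 jumps, as quoted
print(round(distance_mly(0.0565)))   # ~850 million light-years
print(round(distance_mly(0.0125)))   # ~190 million light-years
print(round(37.6 / BASIC_QUANTUM))   # ~14: the Guthrie-Napier interval
```

With these assumed values the quoted figures of roughly 6,200 jumps, 850 and 190 million light-years, and the 13-14 jump Guthrie-Napier block all come out consistently.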
In Douglas Kelly's book, Creation and Change: Genesis 1.1 - 2.4 in the light of changing scientific paradigms (1997, Christian Focus Publications, Great Britain) a changing speed of light is discussed in terms of Genesis. Endeavoring to present both sides of the variable light speed argument, he asked for a comment from Professor Frederick N. Skiff. Professor Skiff responded with a private letter which Kelly published on pp. 153 and 154 of his book. The letter is quoted below and, after that, Barry Setterfield responds.
Setterfield: During the early 1980's it was my privilege to collect data on the speed of light, c. In that time, several preliminary publications on the issue were presented. In them the data list increased with time as further experiments determining c were unearthed. Furthermore, the preferred curve to fit the data changed as the data list became more complete. In several notable cases, this process produced trails on the theoretical front and elsewhere which have long since been abandoned as further information came in. In August of 1987, our definitive Report on the data was issued as The Atomic Constants, Light and Time in a joint arrangement with SRI International and Flinders University. Trevor Norman and I spent some time making sure that we had all the facts and data available, and had treated it correctly statistically. In fact the Maths Department at Flinders Uni was anxious for us to present a seminar on the topic. That report presented all 163 measurements of c by 16 methods over the 300 years since 1675. We also examined all 475 measurements of 11 other c-related atomic quantities by 25 methods. These experimental data determined the theoretical approach to the topic. From them it became obvious that, with any variation of c, energy is going to be conserved in all atomic processes. A best fit curve to the data was presented.
In response to criticism, it was obvious the data list was beyond contention - we had included everything in our Report. Furthermore, the theoretical approach withstood scrutiny, except on the two issues of the redshift and gravitation. The main point of contention with the Report has been the statistical treatment of the data, and whether or not these data show a statistically significant decay in c over the last 300 years. Interestingly, all professional statistical comment agreed that a decay in c had occurred, while many less qualified statisticians claimed it had not! At that point, a Canadian statistician, Alan Montgomery, liaised with Lambert Dolphin and me, and argued the case well against all comers. He presented a series of papers which have withstood the criticism of both the Creationist community and others. From his treatment of the data it can be stated that c decay (cDK) [note: this designation has since been changed to 'variable c' or Vc] has at least formal statistical significance.
However, Zero Point Energy and the Redshift takes the available data right back beyond the last 300 years. In so doing, a complete theory of how cDK occurred (and why) has been developed in a way that is consistent with the observational data from astronomy and atomic physics. In simple terms, the light from distant galaxies is redshifted by progressively greater amounts the further out into space we look. This is also equivalent to looking back in time. As it turns out, the redshift of light includes a signature as to what the value of c was at the moment of emission. Using this signature, we then know precisely how c (and other c-related atomic constants) has behaved with time. In essence, we now have a data set that goes right back to the origin of the cosmos. This has allowed a definitive cDK curve to be constructed from the data and ultimate causes to be uncovered. It also allows all radiometric and other atomic dates to be corrected to read actual orbital time, since theory shows that cDK affects the run-rate of these clocks.
A very recent development on the cDK front has been the London Press announcement on November 15th, 1998, of the possibility of a significantly higher light-speed at the origin of the cosmos. I have been privileged to receive a 13-page pre-print of the Albrecht-Magueijo paper (A-M paper) which is entitled "A time varying speed of light as a solution to cosmological puzzles". From this fascinating paper, one can see that a very high initial c value really does answer a number of problems with Big Bang cosmology. My main reservation is that it is entirely theoretically based. It may be difficult to obtain observational support. As I read it, the A-M paper requires c to be at least 10^60 times its current speed from the start of the Big Bang process until "a phase transition in c occurs, producing matter, and leaving the Universe very fine-tuned ...". At that transition, the A-M paper proposes that c dropped to its current value. By contrast, the redshift data suggests that cDK may have occurred over a longer time. Some specific questions relating to the cDK work have been raised. Penny wrote to me that someone had suggested "that the early measurements of c had such large probable errors attached, that (t)his inference of a changing light speed was unwarranted by the data." This statement may not be quite accurate, as Montgomery's analysis does not support this conclusion. However, the new data set from the redshift resolves all such understandable reservations.
There have been claims that I 'cooked' or mishandled the data by selecting figures that fit the theory. This can hardly apply to the 1987 Report as all the data is included. Even the Skeptics admitted that "it is much harder to accuse Setterfield of data selection in this Report". The accusation may have had some validity for the early incomplete data sets of the preliminary work, but I was reporting what I had at the time. The rigorous data analyses of Montgomery's papers subsequent to the 1987 Report have withstood all scrutiny on this point and positively support cDK. However, the redshift data in the forthcoming paper overcomes all such objections, as the trend is quite specific and follows a natural decay form unequivocally.
Finally, Douglas Kelly's book Creation and Change contained a very fair critique on cDK by Professor Fred Skiff. However, a few comments may be in order here to clarify the issue somewhat. Douglas Kelly appears to derive most of his information from my 1983 publication "The Velocity of Light and the Age of the Universe". He does not appear to reference the 1987 Report which updated all previous publications on the cDK issue. As a result, some of the information in this book is outdated. In the "Technical And Bibliographical Notes For Chapter Seven" on pp.153-155 several corrections are needed as a result. In the paragraph headed by "1. Barry Setterfield" the form of the decay curve presented there was updated in the 1987 Report, and has been further refined by the redshift work which has data back essentially to the curve's origin. As a result, a different date for creation emerges, one in accord with the text that Christ, the Apostles and Church Fathers used. Furthermore this new work gives a much better idea of the likely value for c at any given date. The redshift data indicate that the initial value of c was 2.54 x 10^10 times the speed of light now. This appears conservative when compared with the initial value of c from the A-M paper of 10^60 times c now.
Professor Skiff then makes several comments. He suggests that cDK may be acceptable if "Planck's constant is also changing in such a way as to keep the fine structure 'constant' constant." This is in fact the case as the 1987 Report makes clear.
Professor Skiff then addresses the problem of the accuracy of the measurements of c over the last 300 years. He rightly points out that there are a number of curves which fit the data. Even though the same comments still apply to the 1987 Report, I would point out that the curves and data that he is discussing are those offered in 1983, rather than those of 1987. It is unfortunate that the outcome of the more recent analyses by Montgomery are not even mentioned in Douglas Kelly's book.
Professor Skiff is also correct in pointing out that the extrapolation from 300 years of data is "very speculative". Nevertheless, geochronologists extrapolate by factors of up to 50 million to obtain dates of 5 billion years on the basis of less than a century's observations of half-lives. However, the Professor's legitimate concern here should be largely dissipated by the redshift results which take us back essentially to the origin of the curve and define the form of that curve unambiguously. The other issue that the Professor spends some time on is the theoretical derivation for cDK, and a basic photon idea which was used to support the preferred equation in the 1983 publication. Both that equation and the theoretical derivation were short-lived. The 1987 Report presented the revised scenario. The upcoming redshift paper has a completely defined curve that has a solid observational basis throughout. The theory of why c decayed, along with the associated changes in the related atomic constants, is rooted firmly in modern physics with only one very reasonable basic assumption needed. I trust that this forthcoming paper will be accepted as contributing something to our knowledge of the cosmos.
Professor Skiff also refers to the comments by Dr. Wytse Van Dijk who said that "If (t)his model is correct, then atomic clocks should be slowing compared to dynamical clocks." This has indeed been observed. In fact it is mentioned in our 1987 Report. There we point out that the lunar and planetary orbital periods, which comprise the dynamical clock, had been compared with atomic clocks from 1955 to 1981 by Van Flandern and others. Assessing the evidence in 1984, Dr. T. C. Van Flandern came to a conclusion. He stated that "the number of atomic seconds in a dynamical interval is becoming fewer. Presumably, if the result has any generality to it, this means that atomic phenomena are slowing with respect to dynamical phenomena ..." This is the observational evidence that Dr. Wytse Van Dijk and Professor Skiff required. Further details of this assessment by Van Flandern can be found in "Precision Measurements and Fundamental Constants II", pp.625-627, National Bureau of Standards (US) Special Publication 617 (1984), B. N. Taylor and W. D. Phillips editors.
In conclusion, I would like to thank Fred Skiff for his very gracious handling of the cDK situation as presented in Douglas Kelly's book. Even though the information on which it is based is outdated, Professor Skiff's critique is very gentlemanly and is deeply appreciated. If this example were to be followed by others, it would be to everyone's advantage.
Setterfield: In the 1987 Report which is on these web pages, we show that atomic rest masses "m" are proportional to 1/c^2. Thus when c was higher, rest masses were lower. As a consequence the energy output of stars etc. from the E = mc^2 reactions is constant over time when c is varying. Furthermore, it can be shown that the product Gm is invariant for all values of c and m. Since all the orbital and gravitational equations contain Gm, there is no change in gravitational phenomena. The secondary mass in these equations appears on both sides of the equation and thereby drops out of the calculation. Thus orbit times and the gravitational acceleration g are all invariant. This is treated in more detail in General Relativity and the ZPE.
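The scaling argument above can be illustrated numerically. This is only a sketch of the stated claims, with illustrative values: the electron mass and the use of Kepler's third law as the orbital example are my additions, not from the Report itself.

```python
# Numerical sketch of the claimed scalings: rest mass m varies as 1/c^2,
# so E = m*c^2 is unchanged; and with the product G*M held invariant,
# Kepler's third law gives an unchanged orbital period.
import math

C_NOW = 2.998e8   # present speed of light, m/s

def rest_mass(m_now, c_ratio):
    """Rest mass when c is c_ratio times its present value (m proportional to 1/c^2)."""
    return m_now / c_ratio**2

def rest_energy(m_now, c_ratio):
    """E = m*c^2 with the scaled mass and the scaled c."""
    return rest_mass(m_now, c_ratio) * (C_NOW * c_ratio)**2

def orbital_period(a, GM):
    """Kepler's third law, T = 2*pi*sqrt(a^3 / (G*M)); with G*M invariant
    the period does not depend on the value of c at all."""
    return 2 * math.pi * math.sqrt(a**3 / GM)

m_e = 9.109e-31                      # electron rest mass today, kg (illustrative)
E_now = rest_energy(m_e, 1.0)
E_then = rest_energy(m_e, 1e6)       # c a million times higher
print(E_then / E_now)                # ratio is 1.0 to floating-point precision
```

The point of the sketch is simply that the 1/c^2 mass dependence cancels the c^2 in the energy formula, which is the energy-conservation claim made in the Report.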
Comments: I quote the eminent gravitation theorist Charles Misner, intending no offense in quoting his terminology. He wrote:
Setterfield: The main thrust of these comments is that if new work does not build on the bases established by general and special relativity, then it must be an "improper" theory. I find this attitude interesting as a similar view has been expressed by a quantum physicist. This physicist has accepted and taught quantum electrodynamics (QED) for many years and has been faithful in his proclamation of QED physics. However, when he was presented with the results of the rapidly developing new interpretation of atomic phenomena based on more classical lines called SED physics, he effectively called it an improper theory since it did not build on the QED interpretation. It did not matter to him that the results were mathematically the same, though the interpretation of those results was much more easily comprehensible. It did not matter to him that the basis of SED was anchored firmly in the work of Planck, Einstein and Nernst in the early 1900's, and that many important physicists today are seriously examining this new approach. It had to be incorrect because it did not build on the prevailing paradigm.
I feel that the above comments may perhaps be in a similar category. The referenced quote implies that this lightspeed work does "not have roots and branches reaching out, securing them into the other, more firmly established, theories of physics." However, I have gone back to the basics of physics and built from there. But if by the basics of physics one means general and special relativity, I admit guilt. However, there is a good reason that I do not build on special or general relativity and use the types of equations those formalisms employ. In most of the work using those equations, the authors put lightspeed, c, and Planck's constant, h, equal to unity. Thus at the top of their papers or implied in the text is the equation: c=ħ=1. Obviously, in a scenario where both c and h are changing, such equations are inappropriate. Instead, what the lightspeed work has done is to go back to first principles and basic physics, such as Maxwell's equations, and build on that rather than on special or general relativity. This also makes for much simpler equations. Why complicate the issue when it can be done simply?
There is a further reason. With significant changes to c and h, it may mean that general relativity should be re-examined. A number of serious scientists have thought this way. For example, SED physics is providing a theory of gravity which is already unified with the rest of physics. This approach employs very different equations from those of general relativity. A second example is Robert Dicke, who, in 1960, formulated his scalar-tensor theory of gravity as a result of observational anomalies. This Brans-Dicke theory became a serious contender against general relativity up until 1976 when it was disproved on the basis of a prediction. Note, however, that the original anomalous data that led to the theory still stand; the anomaly still exists in measurements today, and it is not accounted for by general relativity. A third illustration comes from 2002. In this last year, over 50 papers addressing the matter of lightspeed and relativity have been accepted for publication by serious journals. These facts alone indicate that the last word has not been spoken on this matter. It is true that the 2002 papers have been tinkering around the edges of relativity. The whole issue probably needs an infusion of new thinking in view of the changing values of c and this other anomalous data, despite the comments of Misner. For these three reasons I am reluctant to dance with the existing paradigm and utilize those equations which may be an incomplete description of reality.
Therefore, I plead guilty in that I am not following the path dictated by relativity. But this does not necessarily prove that I am wrong, any more than SED physics is wrong, or that Brans and Dicke were wrong in trying to find a theory to account for the observational anomalies. On the basis of common sense and the history of scientific endeavor, I therefore feel that the "requirement" presented above may legitimately be ignored.
The following response was posted as part of a general discussion of the material by a third person:
Setterfield: "Science must not neglect any anomaly but give nature's answers to the world humbly and with courage." [Sir Henry Dale, President of the Royal Society, 1981]
By neglecting the anomalies associated with the dropping speed of light values, and neglecting the anomalies associated with mass measurements of various sorts and the problems with quantum mechanics, those adhering to relativistic theory have left themselves open to the charge that relativity has become theory-based rather than observationally-based.
Thanks [to the second correspondent] for your summation of the situation, which is largely correct. The evidence does indeed suggest that h tends to zero and c tends to infinity as one approaches earlier epochs astronomically. At the same time, the product hc has been experimentally shown to be invariant over astronomical time, just as you indicated. These experimental results, the theoretical approach that incorporated them, and their effects on the atom, were thrashed out in the 1987 Report. These ideas have been subsequently developed further in Exploring the Vacuum. Later, Reviewing the Zero Point Energy refined this further. In this way the correspondence principle has been upheld from its inception, and I had thought that this part of the debate was substantially over. However, those unfamiliar with the 1987 Report would not be expected to know this and may wonder about its validity as a result.
As far as intra-system energy conservation is concerned, that issue was partly addressed in the 1987 Report where it was shown that atomic processes were such that energy was conserved in the system over time, and in more detail in the main 2001 paper. The outcome was hinted at in the Vacuum paper in the context of a changing zero-point energy and a quantized redshift. However, another paper dealing with these specific matters is proposed.
Finally, I, too, am dismayed by the hijacking of SED work by the new-agers, but that should not cloud the valid physics involved.
From another person:
Setterfield: The key issue which is raised here concerns the work of Webb et al that indicated there was a change in alpha, the fine structure constant, by about one part in 100,000. A couple of points. The first problem is that this result is very difficult to disentangle from redshifted data. One first has to be sure that this change is separate from anything the redshift has produced. The second potential difficulty is that all the data has been collected from only one observatory and may be an artifact of the instrumentation. This latter difficulty will soon be overcome as other instruments are scheduled to make the observations as well. That will be an important test. The third point is that observational evidence, some of which is listed in the 1987 Report, indicates that the product hc is invariant with time. This only leaves the electronic charge, e, or the permittivity of free space, epsilon, to be the quantities giving any actual variation in alpha, unless alpha itself is changing. However, this whole situation appeals to my sense of humor. Physicists are getting excited over a suspected change of 1 part in 100,000 in alpha over the lifetime of the universe, but ignore a change of greater than 1% in c that has been measured by a variety of methods over 300 years.
It was noted that the results of the variable c (Vc) research applied to the early universe might “indicate that high energy physics is governed by classical rather than quantum mechanics at extreme temperatures and densities.” In response, it is fair to say that I have not investigated that possibility. What this research is showing is that the basic cause of all the changes in atomic constants can be traced to an increase with time in the energy density of the Zero-Point Energy (ZPE). Thus the ZPE was lower at earlier epochs. This has a variety of consequences which are being followed through in this series of papers, of which the Vacuum paper is the first. One consequence of a lower energy density for the ZPE is a higher value for c in inverse proportion as the permittivity and permeability of space are directly linked with the ZPE. Another consequence concerns Planck’s constant h. Planck, Einstein, Nernst and others have shown mathematically that the value of h is a measure of the strength of the ZPE. Therefore, any change in the strength of the ZPE with time also means a proportional change in the value of h. The systematic increase in h which has been measured over the last century as outlined in the 1987 Report implies an increase in the strength of the ZPE. Thus the invariance of hc is also explicable. But a lower value for h means that quantum uncertainty is also less in those epochs. This in turn means that atomic particle behaviour was more classical in the early days of the cosmos. This result seems to be independent of the temperature and density of matter, but does not deny the possibility of other effects. The final matter that the Vacuum paper addresses is the cause for the increasing strength of the ZPE. The work of Gibson allows it to be traced to the turbulence in the ‘fabric of space’ at the Planck length level induced by the expansion of the cosmos.
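The reciprocal scalings described above (c inversely proportional to the ZPE strength, h directly proportional to it, hence hc invariant) can be sketched in a few lines. This is purely an illustration of the stated proportionalities; the specific ZPE ratios chosen are arbitrary.

```python
# Sketch of the claimed scalings: c varies inversely with the ZPE energy
# density, h varies in direct proportion to it, so the product h*c is the
# same at every epoch. The ZPE ratios below are arbitrary illustrations.
H_NOW = 6.626e-34      # present value of Planck's constant, J*s
C_NOW = 2.998e8        # present speed of light, m/s

def constants_at(zpe_ratio):
    """h and c when the ZPE strength is zpe_ratio times its present value."""
    h = H_NOW * zpe_ratio       # h proportional to ZPE strength
    c = C_NOW / zpe_ratio       # c inversely proportional to ZPE strength
    return h, c

for u in (0.001, 0.5, 1.0):
    h, c = constants_at(u)
    # h*c comes out the same at every epoch, matching the claimed invariance
    print(f"ZPE ratio {u}: h*c = {h * c:.6e}")
```

Note also that with a small zpe_ratio, h is small, which is the sense in which quantum uncertainty is claimed to be reduced at early epochs.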
Setterfield: Many thanks for bringing these papers to my attention. However, they do not pose the problem to the SED approach and/or the Variable c (Vc) model that the questioner supposes. Basically, on the Vc model, quantum uncertainty is less at the inception of the cosmos because Planck's constant times the speed of light is invariable. Therefore, when the speed of light was high, quantum uncertainty was lower. But other issues are also raised by the papers referred to above. Therefore, let us take this a step at a time.
In the Lieu and Hillman paper of 18th November 2002 entitled “Stringent limits on the existence of Planck time from stellar interferometry” they specifically state in the Abstract that they “present a method of directly testing whether time is ‘grainy’ on scales of … [about] 5.4 x 10^-44 seconds, the Planck time.” They then use the techniques of stellar interferometry to “place stringent limits on the graininess of time.” Elucidation of the rationale behind their methodology comes in the first sentence, namely “It is widely believed that time ceases to be exact at intervals [less than or equal to the Planck time] where quantum fluctuations in the vacuum metric tensor renders General Relativity an inadequate theory.” They then go on to demonstrate that if time is ‘grainy’ or quantised, then the frequencies of light must also be quantised since frequency is a time-dependent quantity. Furthermore, they point out that quantum gravity demands that “the time t of an event cannot be determined more accurately than a standard deviation of [a specific form]…” This form is then plugged into their frequency equations which indicate that light photons from a suitably distant optical source will have their phases changed randomly. But interferometers take two light rays from a distant astronomical source along different paths and converge them to form interference fringes. They then conclude “By Equ. (11), however, we see that if the time quantum exists the phase of light from a sufficiently distant source will appear random – when [astronomical distance] is large enough to satisfy Equ. (12) the fringes will disappear.” This paper, and their subsequent one, both point out that the fringes still exist even with very distant objects. The conclusion is that time is not ‘grainy’, in contradiction to quantum gravity theories. This result is a serious blow to all quantum gravity theories and a major re-appraisal of their validity is needed as a consequence.
Insofar as these results also call into question the very existence of space-time, upon which all metric theories of gravity are based, then considerable doubt must be expressed as to the reality of this entity.
However, this is not detrimental to the SED approach, since gravity is already a unified force in that theory. It is in an attempt to unify gravity with the other forces of nature, including quantum phenomena, that quantum gravity was introduced. By contrast, SED physics presents a whole new view of quantum phenomena and gravity, pointing out that both arise simply as a result of the “jiggling” of subatomic particles by the electromagnetic waves of the Zero-Point Energy (ZPE). Since this ZPE jiggling is giving rise to uncertainty in atomic events, this uncertainty is not traceable to either uncertainty in other systems or to an intrinsic property of space or time. This point was made towards the close of my Journal of Theoretics article “Exploring the Vacuum”. As a consequence, it becomes apparent that time is not quantised on this Vc approach.
Ragazzoni, Turatto and Gaessler use more recent data to reinforce the original conclusions of Lieu and Hillman. These latter two then expand on their 2002 approach in their 27th January 2003 paper “The phase coherence of light from extragalactic sources – direct evidence against first order Planck scale fluctuations in time and space.” They take some Hubble Space Telescope results from very distant galaxies to reinforce their earlier conclusions. In the Abstract of this 2003 paper they also state that “According to quantum gravity, the time t of an event cannot be determined more accurately than a standard deviation of [a specific form]…likewise distances are subject to an ultimate uncertainty…” They then use this distance uncertainty relationship with light from astronomical sources to demonstrate that there is no ‘graininess’ to space at the Planck length.
Here is the key point. In order to obtain an uncertainty in distance, they multiply the uncertainty in time by the speed of light. If there is no uncertainty in time, as the Vc model indicates, then the equations used by Lieu and Hillman cannot be employed to discover if there is any uncertainty in distance at the Planck length. Furthermore, the final statement in their 2003 Abstract, namely that “The same result may be used to deduce that the speed of light in vacuo is exact to a few parts in 10^32”, is also incorrect for the same reason. Nevertheless, insofar as they are using quantum gravity equations and the resulting concept of the graininess of space-time, these results indicate that space-time is not grainy, and therefore quantum gravity is conceptually in error.
However, there are other ways of determining whether or not space itself is ‘grainy’ at the Planck length level. If metric theories of gravity have any validity at all, and the work of Lieu and Hillman has cast serious doubt on this, then an approach suggested by Y. Jack Ng and H. van Dam may soon provide observational evidence for the existence of the graininess of space-time. They write in their Abstract “We see no reason to change our cautious optimism on the detectability of space-time foam with future refinements of modern gravitational-wave interferometers like LIGO/VIRGO and LISA.” [arXiv:gr-qc/9911054 v2 28th March 2000]. Their metric equations indicate that over the size of the whole observable universe, an expected fluctuation of only 10^-15 metres would manifest as quantised gravitational waves. Upcoming refinements in gravitational wave interferometers will soon allow this degree of uncertainty to be detected. If these refinements do not detect quantised gravitational waves, then there is further trouble for some metric theories of gravity. Indeed, as at the moment of writing, no gravitational waves have been detected at all by these expensive interferometers. If this situation continues to exist with the proposed refinements, then the validity of General Relativity may be called into question and the SED option become more attractive.
A different approach has been adopted by Baez and Olson which suggests that wave fluctuations the size of the Planck length are the only ones expected to exist if the fabric of space exhibits graininess at that scale. As a result, such graininess will be undetectable to gravitational wave interferometers [… January 2002].
The outcome from this discussion is that the granular structure of space is still a very viable option when the SED approach is followed through, as it is in the Vc model. The anonymous reader’s final two paragraphs therefore draw incorrect conclusions. However, if, as on the Vc model, a decrease in the value of Planck's constant can also be construed as meaning a decrease in the uncertainty of time, then part of the problem that has been raised by these Hubble telescope observations may be overcome. If the decrease in the uncertainty of time at the inception of the cosmos is followed through, then this may provide an answer for the problem that these observations pose to theories of quantum gravity. Thus the graininess of space is not called into question in the Vc approach. (April 3, 2003 updated)
In response to a request for a slightly more simple response, Barry wrote the following:
The theoretical basis for these experiments is the expected fuzziness or granularity of space and time that emerges from those theories that attempt to meld general relativistic concepts of gravity with those of quantum mechanics. The respective papers by Lieu and Hillman, as well as Ragazzoni et al, have concentrated on the expected fuzziness or graininess of time. They deduced that if such a quantum fuzziness or granularity for time really exists, then there will be a smearing of light photons from a sufficiently distant source which will give slightly blurry pictures of very distant astronomical objects, the blurriness increasing with distance. As it turns out, the Hubble Space Telescope pictures of the most distant objects are sharp, not blurry. This may call into question the whole concept of quantum gravity. However, the newly developing branch of SED physics has a completely consistent approach to gravity that is already unified to other physical concepts, and therefore does not need “unifying” in the way that quantum gravity theories attempt to do. On this basis, the HST images of distant objects support the SED approach rather than the quantum gravity approach.
Furthermore, these results are not detrimental to the variable speed of light (Vc) model. On this approach, quantum uncertainty was getting less the further back into the past we go. This uncertainty is given by Planck’s constant, h. At the inception of the cosmos, h was very much smaller than it is now. Since the units of Planck’s constant are energy multiplied by time, this means that the uncertainty in time was very much less (of the order of 1 part in 10^7) for those distant astronomical objects. On that basis, the results from the Hubble Space Telescope are entirely explicable on the Vc model.
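The scaling of time uncertainty with h can be sketched via the energy-time uncertainty relation. This is an illustration only: the 1 eV energy spread is an arbitrary choice, and the factor of 10^7 is simply the order of magnitude quoted above.

```python
# Sketch: if h was smaller by some factor in the past, then for a fixed
# energy spread dE the minimum time uncertainty from dE*dt >= hbar/2
# shrinks by the same factor. The 1e7 factor is the order quoted above.
import math

H_NOW = 6.626e-34   # present value of Planck's constant, J*s

def min_time_uncertainty(delta_E, h):
    """Minimum dt from the energy-time uncertainty relation, hbar/(2*dE)."""
    hbar = h / (2 * math.pi)
    return hbar / (2 * delta_E)

dE = 1.602e-19                                    # 1 eV spread (illustrative)
dt_now = min_time_uncertainty(dE, H_NOW)
dt_then = min_time_uncertainty(dE, H_NOW / 1e7)   # h smaller by ~10^7
print(dt_then / dt_now)                           # reduced by the same 1e-7 factor
```

The sketch just makes explicit that the time uncertainty is directly proportional to h, which is the step the argument above relies on.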
Setterfield: This question raises an important issue. Compare the behaviour of an infinitely long beam of light from an object at the frontiers of the cosmos, or the wavetrain associated with a single photon, entering a denser medium such as glass or water, with its behaviour under a cosmologically changing ZPE. In the first instance, imagine the beam or wavetrain passing from air into glass so that the light ray meets the glass perpendicularly. In this case, “every point on a given wavefront enters the glass slab simultaneously and, hence, experiences a simultaneous retardation, since the velocity of light is less in glass than in air. The wave fronts in the glass are therefore parallel to those in the air but closer together…” [Martin & Connor, Basic Physics, Vol. 3, pp. 193-194]. Thus the wave fronts bunch up in the glass as the waves behind approach the glass at higher speed and crowd together in the denser medium. The same effect can be seen on a highway when an obstacle slows the traffic stream and cars bunch up near the obstacle. The cause of the effect is that this example involves two concurrent values for lightspeed: one in air, the other in glass.
This situation does not apply to emitted light traveling through a cosmos where the ZPE is changing. In that case, an infinitely long beam or a photon wavetrain is traveling through the vacuum, and the energy density of the vacuum is increasing smoothly and simultaneously throughout the universe. This means that all parts of the infinitely long beam or the wavetrain slow SIMULTANEOUSLY. In other words, there is no bunching-up effect, because all parts of the beam or wavetrain travel at the same velocity. A similar situation would exist on a highway if all cars were slowing simultaneously at the same rate: the distance between the cars would remain constant, but the number of cars passing any given point per unit of time would decrease in proportion to the speed of the traffic stream. For that reason, in the lightspeed case, wavelengths remain fixed in transit while the frequency, the number of waves passing a given point per unit time, drops in proportion to the rate of travel. Therefore, in a situation with cosmologically changing ZPE, the frequency of light is lightspeed-dependent, while the wavelength remains fixed. It was the experimental proof of this very fact that was being seriously discussed by Raymond T. Birge in Nature in the 1930s.
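The contrast drawn above can be sketched numerically using c = f w. This is a minimal illustration only: the function names and the sample values (500 nm light, a refractive index of 1.5, a halving of c) are chosen for clarity and are not taken from any measured data in the text.

```python
# Toy contrast of the two scenarios described above, using c = f * w.

def into_denser_medium(c_vacuum, wavelength_vacuum, n):
    """Light crossing into a denser medium of refractive index n:
    the frequency is fixed by the source, so the wavelength shortens
    (the 'bunching up' of wave fronts in the glass)."""
    f = c_vacuum / wavelength_vacuum      # frequency unchanged at the boundary
    c_medium = c_vacuum / n               # light is slower in the medium
    return f, c_medium / f                # wavelength compresses to w / n

def uniform_slowing(c_old, wavelength, factor):
    """Cosmos-wide slowing (the changing-ZPE scenario argued above):
    every part of the wavetrain slows together, so the wavelength stays
    fixed and the frequency drops in proportion to c."""
    c_new = c_old * factor
    return c_new / wavelength, wavelength  # (new frequency, same wavelength)

c = 3.0e8                  # lightspeed in m/s
w = 500e-9                 # 500 nm wavelength

f_glass, w_glass = into_denser_medium(c, w, n=1.5)
f_slow, w_slow = uniform_slowing(c, w, factor=0.5)

print(w_glass)  # shorter than 500 nm: wave fronts crowd together
print(w_slow)   # still 500 nm: no bunching, the frequency halves instead
```

The point of the sketch is simply that the two cases put the change in different factors of c = f w: a denser medium compresses w at constant f, while a simultaneous cosmos-wide slowing lowers f at constant w.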
Another consideration applies here also. The energy E of a photon of light is given by E = hf, where h is Planck’s constant and f is frequency. In the situation considered here, the energy density of the ZPE is uniformly increasing, and h is a measure of the energy density of the ZPE; thus, as the ZPE strength increases, so does h. It has been suggested by some that the frequency, f, should nevertheless remain constant for light in transit through these changes. But this is not the case. If it were, every photon in transit through the universe would be gaining energy as it travels; in other words, energy would not be conserved. However, with the formulation that has been adopted in the paper under review, energy is conserved, as would be expected, and so it is the frequency that varies, not the wavelength.
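The energy argument above reduces to simple arithmetic with E = hf. The following check uses made-up toy units (h doubling, an initial frequency of 10) purely to show the bookkeeping, not any physical values:

```python
# Toy check of the E = h * f conservation argument above.
# If h tracks the ZPE strength and f falls inversely with it, the photon's
# energy in transit is unchanged; if f were held fixed instead, E would grow.

h_then, h_now = 1.0, 2.0      # h doubles as the ZPE strength doubles (toy units)
f_then = 10.0                  # initial frequency (toy units)

# Formulation adopted in the paper: f varies inversely with h
f_now = f_then * (h_then / h_now)
E_then = h_then * f_then
E_now = h_now * f_now
print(E_then == E_now)         # energy conserved in transit

# Alternative some suggest: f held constant while h grows
E_fixed_f = h_now * f_then
print(E_fixed_f > E_then)      # the photon would gain energy: not conserved
```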
Setterfield: The approach that the reviewer has taken in item 2 dictates the response that he sees as appropriate to item 3. He seems to have mistakenly applied the results obtained from light traveling in an inhomogeneous medium to a situation in which there are simultaneous cosmos-wide changes in the medium. This is inappropriate, as noted above, and does not agree with experiment. If the approach that deals with simultaneous cosmological changes is adopted, then the redshift originates in the way I have indicated in my work, and the reviewer's basic objection has already been answered.
However, the applicability of Maxwell’s equations is also called into question here. It is occasionally mentioned that these equations imply a constant speed of light in the vacuum, and any variation elsewhere is treated on the basis of a changing refractive index of the medium concerned. As noted above, this approach is inappropriate for the situation considered in this paper. Since Maxwell’s equations seem to imply a constant value for c in a vacuum, this condition can only apply to a changing c scenario when seen from the atomic point of view. Let us explore this a little.
Since all atomic processes are linked to the behaviour of the ZPE, as is lightspeed, then as c declines with increasing ZPE, so too do the rates of atomic processes, including atomic clocks and atomic frequencies. Therefore, as seen from the atom, lightspeed is constant and frequencies are constant. Thus Maxwell’s equations apply to an atomic frame of reference when c is varying cosmologically. This means that for Maxwell’s equations to apply in our dynamical or orbital time frame, we have to correct the atomic time used inherently in those equations to read dynamical time instead. When that is done, it is the frequency which varies, not the wavelength. To see this in a simple way, note that the basic equation for lightspeed is c = f w, where f is frequency and w is wavelength. The units of c are, for example, metres per second, while frequency is events per second. Thus it is the “per second” part of this equation that needs to be altered. Since wavelengths, w, are in metres and have no time dependence, all the “per second” changes can only occur in c and f. Thus it is the frequency that will vary under these conditions of varying c, not the wavelength.
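The frame correction described above can be sketched as a unit conversion. This is a toy illustration under the section's own assumption that atomic clock rates scale with c; the function name, the sample wavelength, and the c ratios are all made up for the example:

```python
# Toy sketch of the atomic-vs-dynamical frame argument above. Atomic clock
# rates are assumed to scale with c, so a frequency that is constant per
# atomic second varies in proportion to c when re-expressed per dynamical
# second, while the wavelength, carrying no time units, is untouched.

def to_dynamical_frame(f_atomic, c_ratio):
    """Re-express an atomically constant frequency in dynamical time,
    assuming atomic 'seconds' tick c_ratio times as fast as dynamical ones."""
    return f_atomic * c_ratio

w = 500e-9             # wavelength in metres: no "per second", so unchanged
f_atomic = 6.0e14      # constant as measured by atomic clocks

for c_ratio in (2.0, 1.0, 0.5):    # c higher in the past (illustrative ratios)
    f_dyn = to_dynamical_frame(f_atomic, c_ratio)
    print(c_ratio, f_dyn, f_dyn * w)   # recovered c = f * w scales with c_ratio
```

All the variation lands in the "per second" quantities c and f, exactly as the dimensional argument in the text requires, while w stays fixed.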
There is a problem which needs to be mentioned in closing; a problem which underlies much of the difficulty some are having with the work presented on these pages. Physics has reversed a sequence which should not have been reversed, and in doing so made several wrong choices in the latter part of the twentieth century. Those underlying the reviewer's criticisms have to do with the permeability of space, a mistaken idea about frequency in the behavior of light, and the equations of Lorentz and Maxwell. As mentioned in point 1, permeability was related to the speed of light early in the twentieth century, but was later divorced from it and declared invariant. It was invariant by declaration, not by data, and this is the first backwards move which has influenced the reviewer's thinking here. Secondly, it has become accepted that the frequency of light is the basic quantity and that the wavelength is subsidiary. Until about 1960 it was the wavelength that was considered the basic quantity for measurement. However, since it had become easier to measure frequency with a greater degree of accuracy, the focus shifted from wavelength to frequency as the basic quantity, relegating wavelength to a subsidiary role. The data dictate something else, however: it is the wavelength which remains constant and the frequency which varies when the speed of light changes. This latter point was made plain by experimental data from the 1930s, and was commented on by Birge himself.
In a similar way, although both Lorentz and Maxwell formulated their equations before Einstein adopted and worked with them, it has become almost obligatory to derive the formulas of both Lorentz and Maxwell in terms of Einstein’s work. Properly done, it should be the other way around, and the work of both earlier men should be allowed to stand alone without Einstein’s imposed conditions.
One final note: In the long run, it is the data which must determine the theory, and not the other way around. There are five anomalies cosmology cannot currently deal with in terms of the reigning paradigm. These are easily dealt with, however, when one lets the data go where it will. The original data are in the Report. As given in my lectures, the anomalies concern measured changes in Planck’s constant, the speed of light, changes in atomic masses, the slowing of atomic clocks, and the quantized redshift. Modern physics seems to be showing a preference for ignoring much of this in favor of current theories. That is not the way I wish to approach the subject.
The common factor for solving all five anomalies is increase through time of the zero point energy, for reasons outlined in “Exploring the Vacuum.” The material has also been updated in Reviewing the Zero Point Energy.