3. ZPE and Atomic Constants’ Behavior
Inevitably, if the strength of the ZPE has increased with time, a number of physical quantities dependent upon the ZPE will also be affected. An examination of this development therefore is necessary, and we begin with Planck’s constant.
As noted above, Planck, in 1911, published his ‘second theory’ in which the existence of the ZPE was deduced. His equations revealed that the ZPE strength, or its energy per unit volume (energy density), depended upon what is now called Planck’s constant, h. The ZPE appeared as a temperature-independent additional term in the blackbody radiation equation. The final equation for the total energy density, ρ, of all the radiation then had the form

ρ(ν) dν = (8πν²/c³) [hν/(exp(hν/kT) − 1) + hν/2] dν    (1)
where ν is the radiation frequency, c is the speed of light, and k is Boltzmann’s constant. If the temperature, T, in (1) is allowed to drop to zero, we are still left with the Zero Point term, which is temperature independent. Note also that Planck’s constant, h, in the Zero Point term simply appears as a scale factor to align theory with experimental results; no quantum interpretation is required. Indeed, (1) implies that h is a measure of the ZPE energy density. Thus, if the strength of the ZPE increased, h would proportionally increase, so we can write

h ~ U    (2)
where U is uniquely the energy density of the ZPE term in (1), and the symbol ~ means “proportional to” throughout this paper (it is used here in place of the standard proportionality sign). This relationship may also be expressed as:

h2/h1 = U2/U1    (2A)
In equation (2A), h1 is the present value of h while h2 is its value at some distant galaxy, that is, at an earlier epoch. This nomenclature will be adopted throughout this paper.
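The behaviour of equation (1) can be checked numerically. The sketch below assumes the standard form of Planck’s 1911 expression with present SI values for h, k and c, evaluates the spectral energy density at an optical frequency, and confirms that the thermal term vanishes as T → 0 while the temperature-independent zero-point term hν/2 survives:

```python
import math

h = 6.62607015e-34   # Planck's constant, J s
k = 1.380649e-23     # Boltzmann's constant, J/K
c = 2.99792458e8     # speed of light, m/s

def spectral_energy_density(nu, T):
    """Planck's 1911 form: thermal term plus the zero-point term h*nu/2."""
    prefactor = 8 * math.pi * nu**2 / c**3
    x = h * nu / (k * T)
    thermal = h * nu / math.expm1(x) if x < 700 else 0.0  # avoid overflow as T -> 0
    return prefactor * (thermal + h * nu / 2)

nu = 5e14                                      # an optical frequency, Hz
rho_hot  = spectral_energy_density(nu, 6000.0)
rho_cold = spectral_energy_density(nu, 1e-6)   # effectively the T -> 0 limit
rho_zpe  = (8 * math.pi * nu**2 / c**3) * (h * nu / 2)  # zero-point term alone
```

As T falls, the computed density converges on the zero-point value, which is the sense in which h in (1) acts as a measure of the ZPE strength.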
Several matters now need to be considered. If the quantized redshift data from both nearby and distant galaxies are to be accepted, they may indicate that we live in a universe that is currently static. For example, redshift quantizations exist throughout the Virgo cluster of galaxies. But galaxies in the cluster centre have the quantization washed out by their genuine orbital velocities, which are greater than elsewhere. Motion therefore destroys the quantization. The first conclusion is that the actual motion of galaxies in clusters is minimal, except near the cluster centres. The second is that the redshift quantization cannot be due to recessional motion or the expansion of space-time. Four distinct pieces of evidence will be discussed later indicating that the cosmos is not currently expanding, but rather is static. Later discussion and Appendix 1 also reveal that the apparent fit of the relativistic Doppler formula to the redshift-distance relation may be fortuitous, since Type Ia supernovae show this relationship breaks down at large distances. This problem is solved in the model adopted here without the need for accelerating expansion, but the model shows redshift is still a measure of distance.
In support of a non-expanding cosmos, Narlikar and Arp have shown that a static, matter-filled universe is stable against collapse, though small oscillations would occur. The data may thus indicate that there was a rapid initial expansion of the cosmos to its current size, after which it stabilized with oscillations. The oscillations in a static universe would tend to increase the energy per unit volume of the ZPE when the cosmos was at its minimum position, and decrease the strength of the ZPE at maximum expansion. If the universe has several modes of oscillation, a graph of its behaviour, and hence that of the strength of the ZPE, might be expected to contain flat points in a manner similar to that described by E.A. Karlow in the American Journal of Physics, 62:7 (1994), 634. Thus, even after the ZPE built up to its maximum value, it would be expected that there will be cosmological oscillations in its strength. The build-up in the ZPE, plus any oscillations and flat points in its strength, should thereby be echoed in the experimental values of ZPE-dependent quantities.
There is experimental evidence for a variation in h along with synchronous variation in other atomic quantities related through the ZPE. Thus, it has been generally conceded that the officially declared value of h increased systematically up to about 1970. After this, the data either show a flat point or a small but continuing decline. In 1965, Sanders pointed out that the increasing value of h could only partly be accounted for by the improvements in instrumental resolution. Indeed, such an explanation does not appear to be quantitatively adequate. This is emphasized since other quantities such as (e/h), where e is the electronic charge, (h/2e) or magnetic flux quantum, and (2e/h) or Josephson constant, all show synchronous trends centered around 1970 even though measured by different methods from those used for h. The officially declared values of h and the h/e ratio are listed in Table I. Figure 1 graphs the values of h where the trend becomes apparent. Inspection of Table I reveals that the variation in the data for h comes in the fourth and subsequent numerals. All the data are online in the 1987 Report.
Changes in the ZPE energy density also result in changes in lightspeed, c. Studies on the Casimir effect reveal this. Here, the vacuum energy density is locally lowered between two parallel metal plates compared with the vacuum outside the plates. The reduction of the vacuum energy density between the plates occurs because the only ZPE waves which can exist there are those which can fit an integral number of half wavelengths between the plates. The excluded wavelengths exert a (Casimir) force on the plates which tends to push them together. In early 1990, both Scharnhorst and Barton published analyses of the effect that the lowered energy density between the plates had on the speed of light [42, 43, 44]. Their conclusions were confirmed by a 1995 study on c in ‘modified vacua’, including the Casimir vacuum. The Abstract of this later analysis read in part: “Whether photons move faster or slower than c depends only on the lower or higher energy density of the modified vacuum respectively”. The analysis concluded that in all vacua “It follows automatically that if the vacuum has a lower energy density than the standard vacuum, [lightspeed] v >1, and vice versa”, where v = 1 denotes the current speed of light. In this context, an anisotropy in c was claimed to exist on a large scale in the direction of 5.2 hours Right Ascension and −67 degrees Declination, where c values were claimed to be uniformly higher. This is the location of the Large Magellanic Cloud (LMC). If that research is ever verified, the LMC may be acting as a large Casimir plate with our Galaxy by lowering the ZPE along this axis, thus giving higher c.
Since the ZPE effectively determines the properties of the vacuum, any changes in the strength of the ZPE will mean that the vacuum’s electrical permittivity, ε, and magnetic permeability, μ, are also changing. But with these changes in vacuum properties, the vacuum must remain a non-dispersive medium, otherwise photographs of distant astronomical objects would appear blurred. This requires the ratio of electric energy to the magnetic energy in a traveling wave to remain constant. In turn, this means the intrinsic impedance of free space, Ω, must be invariant. It then follows from the definition of intrinsic impedance that

Ω = √(μ/ε) = invariant    (3)
Thus Ω will always bear the value of 376.7 ohms. From (3) it follows that with all these changes, c must vary inversely to both the vacuum permittivity and permeability, so that

c = 1/√(με) ~ 1/ε ~ 1/μ    (4)
This means that, at any given instant, c would have the same value in all frames of reference throughout the cosmos. Experimentally, this constraint has been verified by Barnet, Davis, & Sanders. It might be objected that any such variation in the components of (4) is contrary to the theory of Relativity. This objection is usually voiced because, according to Relativity, free space should have the same properties to any observer in motion. Thus the individual values of both ε and μ should be the same for all inertial frames. The conclusion is that c must therefore be constant. However, this demand of Relativity is still fulfilled if, at any instant, the individual values of ε and μ are isotropic and homogeneous cosmologically, a condition that is maintained here. The issue of varying lightspeed and Special Relativity is discussed in more depth in Appendix 7.
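The invariance claimed for the impedance, and the inverse response of c, can be illustrated with present SI vacuum values; the scale factor s below is a hypothetical common increase in both ε and μ:

```python
import math

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
mu0  = 1.25663706212e-6  # vacuum permeability, H/m

Z0 = math.sqrt(mu0 / eps0)       # intrinsic impedance of free space, ohms
c0 = 1 / math.sqrt(mu0 * eps0)   # speed of light, m/s

# Hypothetical epoch in which the ZPE has scaled eps and mu together by s:
s = 2.0
Z_scaled = math.sqrt((s * mu0) / (s * eps0))      # s cancels: impedance unchanged
c_scaled = 1 / math.sqrt((s * mu0) * (s * eps0))  # c drops by the factor s
```

The ratio μ/ε is untouched by the common factor, so Z0 stays at 376.7 ohms while c falls in inverse proportion to ε and μ, exactly the behaviour the text requires.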
Now, classically, the energy density of an electromagnetic field is given by U, with E and H being the electric and magnetic intensities of the waves, which are proportional to their amplitude. The standard equation then reads:

U = ½ (εE² + μH²)    (5)
Let us apply (5) exclusively to the intrinsic properties of the vacuum. The energy density, U, then refers to the vacuum Zero-Point Fields while E and H refer specifically to the electric and magnetic intensities of the ZPF. If the intensities of the electromagnetic waves of the ZPE, or their proportional amplitudes, remain unchanged, both E and H will also remain unchanged as the strength of the ZPE varies. Therefore, as U varies so does ε and μ such that

ε ~ U and μ ~ U, so that c = 1/√(με) ~ 1/U    (6)
The last step in (6) follows from (4). This has implications for radiant energy emission that are discussed in Appendix 2. Thus (6) indicates c is inversely related to U, so we can write

c2/c1 = U1/U2    (6A)
A number of authors demonstrated that serious problems facing cosmologists could be solved by a very high value for c at the inception of the cosmos. Thus in 1987, V.S. Troitskii proposed that c was initially 10^10 times c now and it then declined to its present value as the universe aged, along with synchronous variations in several atomic constants. In 1993, Moffat published two articles that suggested a high c value during the earliest moments of the universe with an immediate drop to its present value [51, 52]. Albrecht and Magueijo proposed in 1999 that c was 10^60 times c now at the origin of the cosmos. John Barrow agreed, but suggested it dropped over the lifetime of the cosmos. Unlike Troitskii and this paper, these authors did not consider synchronous changes in related atomic constants. Without synchronism based on the ZPE, deep space data demands any c changes be very limited.
Because experimental evidence indicated c was declining, an ongoing discussion occurred in scientific journals from the mid-1800s to the mid-1940s. This decline was in evidence up to about 1970. Physicists, who have a strong preference for the constancy of atomic quantities, were forced to admit with Dorsey: “As is well known to those acquainted with the several determinations of the velocity of light, the definitive values successively reported … have, in general, decreased monotonously from Cornu’s 300.4 megametres per second in 1874 to Anderson’s 299.776 in 1940…”. Re-working the data could not avoid that conclusion.
But the declining values of c were noticed much earlier. In 1886, Newcomb reluctantly concluded that the values of c obtained around 1740 were in agreement with each other, but were about 1% higher than in his own time. In 1941, Birge made a parallel statement while writing about the c values obtained by Newcomb, Michelson and others around 1880. Birge had to concede that: “…these older results are entirely consistent among themselves, but their average is nearly 100 km/s greater than that given by the eight more recent results”. Yet these scientists held to a constant c, which makes their admission more significant.
Another example is of interest. In 1927, M.E.J. Gheury de Bray was responsible for an initial analysis of the c data. Then, after four new determinations by April of 1931, he said “If the velocity of light is constant, how is it that, INVARIABLY, new determinations give values which are lower than the last one obtained. … There are twenty-two coincidences in favour of a decrease of the velocity of light, while there is not a single one against it”. Later in 1931 he said, “I believe that in any other field of inquiry such a discrepancy between observation and theory would be felt intolerable”. The c values recommended by Birge in 1941 are given in Table II with a plot in Figure 2. As in the case with Planck’s constant, h, the variation in c also occurs in the fourth significant figure over the time range of the c data.
In all, 163 determinations of c with thousands of individual experiments using 16 methods over 330 years comprise the data for declining c values. These data were documented along with the synchronously changing atomic constants in a white paper for Stanford Research Institute (SRI) International and Flinders University in August of 1987. We suggested the decline in c implied a high initial value at the inception of the cosmos, in synchrony with these ‘constants’. In 1993, Montgomery and Dolphin performed a thorough statistical analysis of all the data and confirmed the declining trend was significant, but had flattened out in the period 1970 to 1980. This flattening influenced a decision to make c a universal constant in 1983. Since then, it has only been possible to detect changes in c indirectly; for example, by comparing orbital phenomena with atomic phenomena. This will be discussed in detail below.
However, in this context, E. Greaves has pointed out that the Pioneer anomaly, as evidenced by the faster than expected signal returns from the Pioneer 10 & 11 space probes, and supporting evidence from Galileo and Ulysses, indicate that the energy density of space has become less. He points out that, very recently, similar evidence has been found from other spacecraft [D. Shiga, “Fly-by may be key to Pioneer anomaly”, New Scientist, 19 August 2006, p.13]. As a result he calculates the speed of light at 20 Astronomical Units on his model is now 7.924 km/s higher, which on this paper’s model becomes about 3.9 km/s higher throughout the cosmos. The link to Professor Greaves’ paper, which is currently undergoing review, is: [http://arxiv.org/abs/physics/0701130]. In terms of the analysis in this present paper, these data are in agreement with the synchronous trends exhibited by the other atomic constants discussed here and evidenced by the graphs in Figures 1 to 6, where the turn-around in the previous trend occurred about 1970. Criticisms of the approach adopted by this paper, based on Special Relativity, supernovae & Cepheid variables, radioactive decay, Doppler shifts, and pulsars, are dealt with in Appendices 7, 2, 4, 6 and 8 respectively.
In order to get to the key reason why experiments failed to detect c variation post 1970, several matters need to be settled first. To begin, we note that (2) and (6) indicate that

hc = h1c1 = h2c2 = invariant    (7)
This conclusion is supported to an accuracy of parts per million by observations out to the frontiers of the cosmos [63-68], including studies of the fine structure constant, α. This constant is a combination of four physical quantities such that α = [e²/ε][1/(2hc)], where e is the electronic charge. Indeed, Ford, in the reference cited (p. 1152), considers the fine structure constant to be a measure of e² in natural units. Now, from (7), hc is invariant throughout the cosmos, but the observational evidence for α also requires that, throughout the universe,

e²/ε = invariant    (7A)
An exception occurs in strong gravitational fields, as shown in the reference cited. This is due to a change in the self-energy of the system that is analogous to the change in the stored energy of a charged air capacitor taken to a region of differing dielectric constant. On this basis, the “missing mass” problem is resolved in Appendix 3. Now (6) has ε proportional to U, so that

e² ~ ε ~ U, and hence e ~ √U    (7B)
But many experiments measure e in the context of the permittivity of its environment. Thus changes in e alone often have to be deduced from other quantities such as the ratio h/e. This ratio should be proportional to the square root of U, while h is directly proportional to U. Now the variation in h is in the fourth significant figure, so variations in its square root should be in the fifth figure. When the h/e data in Table I and Figure 3 are examined, the variation in this ratio is seen to occur in the fifth significant figure, which supports (7B).
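A quick numerical check of the grouping α = [e²/ε][1/(2hc)] with present SI values gives the familiar 1/137.036; adding a hypothetical ZPE scale factor s, with each quantity varied as equations (2), (6), (7A) and (7B) prescribe, shows why α stays fixed:

```python
e    = 1.602176634e-19    # electronic charge, C
h    = 6.62607015e-34     # Planck's constant, J s
c    = 2.99792458e8       # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

alpha = (e**2 / eps0) * (1 / (2 * h * c))   # fine structure constant

# Hypothetical epoch with ZPE stronger by s: h ~ U, eps ~ U, e ~ sqrt(U), c ~ 1/U
s = 5.0
alpha_then = (((s**0.5) * e)**2 / (s * eps0)) * (1 / (2 * (s * h) * (c / s)))
```

The factors of s cancel in both the e²/ε and hc groupings, so the computed α is identical at the two epochs.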
To achieve accord with Maxwell’s equations and a time-varying c, frequencies are required to vary as c and wavelengths must remain fixed. One reason is that, since photon energies are conserved in transit through space with time-varying c and h, then we have

E = hf = h1f1 = h2f2 = constant    (8)
Now h is inversely related to c as in (7), so it follows from (8) that a photon’s frequency, f, must be inversely related to h and directly related to c. Thus, as the photon travels through space and its speed declines, the wavelength, λ, remains unchanged, but its frequency drops in proportion to the speed. So for a cosmologically time-varying ZPE, and hence c, we have

f ~ c ~ 1/h, with λ = constant    (9)
Note that this is a different situation to that normally encountered in which light goes from, say, air into glass. In this case, the wavelength varies as light goes from one medium into another. In this situation, as an almost infinite wave-train from a distant quasar goes perpendicularly from air into glass “every point on a given wavefront enters the glass slab simultaneously and, hence, experiences a simultaneous retardation, since the velocity of light is less in glass than in air. The wave fronts in the glass are therefore parallel to those in the air but closer together…” [Martin & Connor, Basic Physics, Vol.3, p.1193-1194]. Thus the wave fronts bunch-up in the glass as the waves behind approach the glass with higher speed, and so crowd together in the denser medium. However, an entirely different situation pertains to light traveling through the universe with a cosmologically increasing ZPE. In this ZPE case, the almost infinite wave-train from a distant quasar will have all its waves throughout the universe slowing simultaneously. In other words, there is no bunching up effect because, over the length of the entire wave train, all parts are traveling with the same velocity at any given time and slowing at the same rate. Thus it is only the number of waves passing per second, that is to say the frequency, that changes in proportion to the velocity.
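The bookkeeping in (8) and (9) can be sketched as follows; the factor s is a hypothetical ratio between the earlier (subscript 2) and present (subscript 1) values of c:

```python
s = 10.0                                 # hypothetical c2/c1 ratio at emission
h1, c1 = 6.62607015e-34, 2.99792458e8    # present values
h2, c2 = h1 / s, c1 * s                  # earlier epoch: hc kept invariant

lam = 500e-9          # wavelength, m: fixed throughout the transit

f2 = c2 / lam         # frequency at emission, when c was higher
E2 = h2 * f2          # photon energy at emission

f1 = c1 / lam         # frequency on arrival today: has dropped with c
E1 = h1 * f1          # photon energy on arrival
```

E1 equals E2 because hc is invariant, so only the frequency, not the wavelength, records the change in c during transit.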
Equations (8) and (9) lead to an observational fact pointed out by Birge. Light-speed had been measured as varying without an observed change in wavelengths compared with the standard metre. Birge admitted that this evidence allowed only one conclusion. He stated “if the value of c … is actually changing with time, but the value of λ in terms of the standard metre shows no corresponding change, then it necessarily follows that the value of every atomic frequency ... must be changing”. This follows from the basic equation

c = fλ    (10)
Since λ is unchanged here, then, as Birge noted, frequency f must obey the equation

f ~ c    (11)
These equations, and Birge’s conclusion that atomic and photon frequencies obey (11), point to one reason for the declared constancy of c in 1983. From 1972 to 1983, the experimental value of c was obtained using lasers. This method measured the frequency of the light at known wavelengths using atomic time and frequency standards to determine c from equation (10). As Birge pointed out, atomic frequencies are changing synchronously with c, so no variation in c can be detected using those methods, which indeed gave a fixed value for c. But there is a basic reason why atomic frequencies (as distinct from photon frequencies) obey (11) and it comes from the behavior of atomic masses – which are ZPE dependent.
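Why a frequency-times-wavelength determination cannot reveal such a c variation can be illustrated with a small sketch. The caesium hyperfine frequency that defines the atomic second is standard; the factor s is a hypothetical change in c, with photon and caesium frequencies assumed to co-vary as in (11):

```python
F_CS = 9.192631770e9   # caesium cycles defining one atomic second

def measured_c(f_photon, f_caesium, lam):
    """c as a laser lab infers it: laser cycles counted per atomic second,
    multiplied by the wavelength measured against the length standard."""
    f_in_atomic_hz = (f_photon / f_caesium) * F_CS
    return f_in_atomic_hz * lam

lam = 632.8e-9                 # He-Ne laser wavelength, m (unchanged in this model)
c_true_now = 2.99792458e8
f_now = c_true_now / lam

s = 1.001                      # hypothetical epoch with c higher by 0.1%
c_meas_now  = measured_c(f_now, F_CS, lam)
c_meas_then = measured_c(s * f_now, s * F_CS, lam)   # both frequencies co-vary
```

The ratio f_photon/f_caesium is untouched by s, so the inferred c is the same at both epochs even though the underlying c differs, which is the point being made about the post-1972 laser determinations.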
Schroedinger, de Broglie, and many physicists consider subatomic particles to be massless, point-like charges, which are sometimes called partons. The electromagnetic waves of the ZPE impinge upon these point-like charges, causing them to randomly jitter in a manner similar to Brownian motion. Schroedinger referred to this “jitter motion” or “trembling motion” by its German equivalent, Zitterbewegung. In the usual model, first proposed by Dirac, the fluctuations of this Zitterbewegung happen at the speed of light. As de Broglie and Schroedinger pointed out, “an electron is actually a point-like charge which jitters about randomly within a certain volume” so it looks like a fuzzy sphere. The origin of parton mass by this mechanism was quantified mathematically around 1990 and explained thus:
“In this view the particle mass m is of dynamical origin, originating in parton-motion response to the electromagnetic zero-point fluctuations of the vacuum. [Rest mass] is therefore simply a special case of the general proposition that the internal kinetic energy of a system contributes to the effective mass of that system”. Similarly, inertial mass results “when an electromagnetically interacting particle is accelerated through the ZPF [because] a force is exerted on the charge; the force is directly proportional to the acceleration but acts in the direction to oppose it. In other words, the charge experiences an electromagnetic force as resistance to acceleration”. Then, in 1997, Haisch, Rueda & Puthoff confirmed that all “inertial mass may be due to a Lorentz force-like electromagnetic interaction between charge at the quark or fundamental lepton level and the ZPF” [38, 70]. Furthermore, different resonant frequencies of different particles result in different masses because “Photons in the quantum vacuum with the same frequency as the jitter are much more likely to bounce off a particle…Higher resonance frequencies … probably mean a greater mass, as there are more high frequency vacuum photons to bounce off”. This occurs because the ZPE has a frequency-cubed distribution up to the cutoff, making it Lorentz invariant. The ZPE thus appears the same to two observers irrespective of their relative velocity.
The formulations of Haisch’s team show that the parton’s oscillation in response to the ZPE gives it a rest mass, which may be represented by the quantity m such that [71, 72]

m = Γℏω²/c²    (12)
Here, ω is the Zitterbewegung oscillation frequency of the particle, while Γ is the Abraham-Lorentz damping constant of the parton. The damping constant is given by Γ = e²/(6πεm*c³), where e is the electronic charge and m* is the intrinsic mass of the parton which is interacting with the electromagnetic zero-point fields of the vacuum. Substituting for Γ in equation (12) then gives the rest-mass, m, in terms of the intrinsic mass, m*, so that

m = e²ℏω²/(6πεm*c⁵)    (13)
The interacting intrinsic mass may be simply an intrinsic energy, E* = m*c², where E* remains unchanged over the parton’s lifetime. The parton’s rest-mass may then be expressed as

m = [e²/ε] ℏω²/(6πE*c³)    (14)
In (14), e²/ε is constant throughout the cosmos from (7A), except in strong gravitational fields. This exception happens as a result of a change in the stored energy of the system in a manner similar to a charged air capacitor taken into a region of differing dielectric constant. Appendix 3 then explains the “missing mass” on this basis. Now Dirac stated the Zitterbewegung occurs near c. Puthoff shows the oscillator resonance frequency is ω = kc, where k is inversely dependent upon parton size, as ZPE waves significantly smaller than this produce little translational movement of the parton. Hence k is independent of c. So we can write

ω = kc ~ c    (15)
This is in accord with equation (11). When (15) is linked with equation (7) the result is that

m ~ h/c ~ h² ~ 1/c² ~ U²    (16)
If (16) is to be expressed without the proportionality, it can be written that

m2/m1 = (h2/h1)² = (c1/c2)² = (U2/U1)²    (16A)
This is an important conclusion. Since atomic masses m are proportional to 1/c², then in Einstein’s equation, E = mc², the energy, E, will be conserved in atomic processes as c varies.
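Two numerical checks on this section are possible with present SI values: the Abraham-Lorentz damping constant for an electron-mass parton, and the conservation of E = mc² when m scales as 1/c². The scale factor s is again a hypothetical change in c:

```python
import math

e    = 1.602176634e-19    # electronic charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e  = 9.1093837015e-31   # electron rest mass, kg
c    = 2.99792458e8       # speed of light, m/s

# Abraham-Lorentz damping constant, Gamma = e^2/(6*pi*eps*m*c^3)
Gamma = e**2 / (6 * math.pi * eps0 * m_e * c**3)   # of order 1e-23 s

# E = m c^2 stays fixed if m ~ 1/c^2:
s = 3.0
c2, m2 = s * c, m_e / s**2    # hypothetical earlier-epoch values
E1 = m_e * c**2
E2 = m2 * c2**2
```

Gamma comes out near 6.27 × 10⁻²⁴ s, the standard Abraham-Lorentz radiation-damping time for the electron, and E2 equals E1 exactly, which is the energy-conservation claim of the text.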
Let us apply (16) to electrons in orbits and nucleons in orbitals. The kinetic energy of these particles is given by ½mv², where v is the tangential velocity. If m varies as 1/c², it follows that v must vary as c. Birge’s statement about atomic frequencies follows, since orbit velocities obey

v ~ c ~ f    (17)
The formulation for electron velocity in the first Bohr orbit verifies this since A.N. Cox in Allen’s Astrophysical Quantities, page 9 (Springer Verlag, 2000) quotes this as being given by

v = e²/(2εh) ~ [e²/ε][1/h] ~ 1/h ~ c    (17A)
In (17A), the proportionalities follow from (2), (7), (7A). In (17), the last proportionality follows from French’s comment that the frequency of light emitted by an electron transition to the ground state orbit “is identical with the frequency of revolution in the [ground state] orbit”. Therefore (17) shows that atomic frequencies generally obey (11) in the same way that photon frequencies do. Thus, when c is higher, atomic frequencies are higher and atomic time intervals, t, are shorter, so t is proportional to 1/c. This is supported by the fact that some forms of atomic time are defined in terms of the electron revolution time in the first Bohr orbit. Thus Cox, op. cit., page 9, gives the time, t, an electron takes for 1/(2π) revolutions in the first Bohr orbit as

t = 2ε²h³/(πme⁴) = 2[ε/e²]² h³/(πm) ~ h ~ U ~ 1/c    (18)
The proportionalities follow from (2), (6), (7), (7A) and (16). From this it is clear that

ct = invariant    (18A)
This means that, as seen from the atomic point of view, the speed of light is a constant. This result has implications for Special Relativity which are discussed in Appendix 7.
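The two Bohr-orbit quantities quoted from Cox can be evaluated directly; the SI forms assumed here are v = e²/(2εh) for the first-orbit speed and t = 2ε²h³/(πme⁴) for the time taken for 1/(2π) revolutions:

```python
import math

e    = 1.602176634e-19    # electronic charge, C
h    = 6.62607015e-34     # Planck's constant, J s
m_e  = 9.1093837015e-31   # electron rest mass, kg
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c    = 2.99792458e8       # speed of light, m/s

v1 = e**2 / (2 * eps0 * h)                         # first Bohr orbit speed, m/s
t1 = 2 * eps0**2 * h**3 / (math.pi * m_e * e**4)   # time for 1/(2*pi) revolutions, s
```

v1 comes out near c/137 (i.e. αc) and t1 near 2.42 × 10⁻¹⁷ s, the atomic unit of time; an atomic clock built on this interval therefore ticks at a rate set by 1/h, and hence, on this model, by c.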
The behavior of atomic masses given by (16) is supported by the officially declared values of electron or proton masses, which show a consistent upward trend until about 1970. After that date, a flat point or slight decline occurred. The officially declared values for the electron rest mass, m, are listed in Table I and graphed in Figure 4. From (16) we can write m = kh², where k is a proportionality constant with a numerical value around 0.20748. The variation in m in the fourth significant figure is in accord with this. The rest mass of the proton follows a precisely similar curve. All the mass data are given in the reference cited.
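The quoted proportionality constant can be checked in SI units; note that the value 0.20748 evidently carries an implied power of ten (m/h² ≈ 0.20748 × 10³⁷ in SI units):

```python
h   = 6.62607015e-34    # Planck's constant, J s
m_e = 9.1093837015e-31  # electron rest mass, kg

k = m_e / h**2          # proportionality constant in m = k h^2
```

With the present CODATA values, k evaluates to about 2.0748 × 10³⁶, matching the significant figures quoted in the text.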
Note that the Rydberg constant for an infinite nucleus, R∞, includes five time-varying quantities h, c, m, ε and e², and combines them such that R∞ = [e²/ε]² [2π²m/(ch³)]. Since [e²/ε]² is constant throughout the cosmos and since hc is invariant, then we are left with the behaviour of the ratio m/h² to be assessed before determining the behaviour of R∞. But from (16) m is proportional to h², so that the ratio m/h² is also a constant. This means that for all the above measured variations in h, c, m, ε and e², the quantity R∞ should be unchanged. Indeed, the officially declared values for R∞ in Table I are stable to the 6th and 7th significant figure. This contrasts with the variations in the 4th figure for h, c, and m.
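In SI units the same combination is usually written R∞ = me⁴/(8ε₀²h³c); the grouping in the text differs only in the unit convention adopted for e²/ε. A numerical check, including the invariance under a hypothetical ZPE scaling s (h ~ U, ε ~ U, e ~ √U, c ~ 1/U, m ~ U²), runs as follows:

```python
e    = 1.602176634e-19    # electronic charge, C
h    = 6.62607015e-34     # Planck's constant, J s
m_e  = 9.1093837015e-31   # electron rest mass, kg
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c    = 2.99792458e8       # speed of light, m/s

def rydberg(m, q, eps, hh, cc):
    """SI form of the Rydberg constant for an infinite nucleus, 1/m."""
    return m * q**4 / (8 * eps**2 * hh**3 * cc)

R_now = rydberg(m_e, e, eps0, h, c)

s = 4.0   # hypothetical ZPE scale factor
R_then = rydberg(s**2 * m_e, (s**0.5) * e, s * eps0, s * h, c / s)
```

The powers of s cancel (s⁴ in numerator and denominator alike), so R comes out identical at both epochs, at the familiar 1.0973732 × 10⁷ m⁻¹, which is the constancy the declared values in Table I display.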
The graph of declared values of R∞ in Figure 5 shows a scatter of values around a mean position typical of a normal distribution of data for a fixed quantity, with, perhaps, a couple of outliers. This contrasts sharply with the trends evident in the quantities graphed in Figures 1 to 4. If these quantities were truly constant, their graphs should show similar characteristics to Figure 5. Statistical treatment of the data in the reference cited confirms the constancy of R∞.
Figures 1 to 5, plus (18) and (18A), suggest that atomic clock rates need a closer scrutiny. Two types of clock are used as timekeepers. First is the orbital clock, which is gravitationally governed and is exemplified by the motions of planets and spacecraft. Second is the atomic clock, which is defined in terms of atomic processes. This clock is the preferred option of science because its precision is at least 5 parts in 10^15 using caesium. The important quantity determining orbit times gravitationally is Gm, where G is the Newtonian gravitational constant and m is the primary mass. With a cosmologically varying ZPE, it can be shown that Gm will remain constant for a given system. Because gravity and mass are both ZPE phenomena, G and m must be related, and G bears the units of [meters³/(kilogram second²)]. Since mass is in the denominator of G, it will cancel out any changes in the product Gm. So for varying ZPE

Gm = G1m1 = G2m2 = invariant    (19)
Now Kepler’s third law for circular orbits states that the orbit time, T, is given by:

T = 2π √[r³/(Gm)]    (19A)
Since Gm is invariant and orbit radius, r, is fixed, then, as the ZPE varies cosmologically, the orbital or gravitational clock marks time at a constant rate as in (19A). Equation (19) also requires that gravitational acceleration remains unchanged. By contrast, (17), (17A), (18) and (18A) indicate atomic clocks tick at a rate proportional to c and proportional to 1/U. This also applies to radiometric clocks as shown in Appendix 4. Thus, with increasing ZPE strength, atomic clocks tick more slowly. So if an atomic interval is given as t, we can write

t ~ U ~ 1/c    (20)
If (20) is to be expressed without the proportionality, it can be stated that

t2/t1 = U2/U1 = c1/c2    (20A)
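Equation (19A) can be checked against the Sun-Earth system, using the standard value of the solar gravitational parameter GM (which the text argues is invariant under a changing ZPE):

```python
import math

GM_sun = 1.32712440018e20  # standard solar gravitational parameter, m^3 s^-2
r = 1.495978707e11         # mean Sun-Earth distance (1 AU), m

T = 2 * math.pi * math.sqrt(r**3 / GM_sun)  # circular-orbit period, s
T_days = T / 86400.0
```

T_days comes out near 365.25, and since neither Gm nor r responds to the ZPE on this model, the orbital clock rate stays fixed while atomic intervals stretch as U grows, which is the contrast the following comparisons of clock types exploit.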
In 1965, Kovalevsky noted that if gravitational and atomic clock rates were different, “then Planck’s constant as well as atomic frequencies would drift”. The data indicate this is so.
These data trends have been noted when comparisons have been made between orbital and atomic clocks. Lunar laser ranging using atomic clocks was compared with orbital data for the interval 1955 to 1981 by Van Flandern. He concluded that: “the number of atomic seconds in a[n orbital] interval is becoming fewer. Presumably, if the result has any generality to it, this means that atomic phenomena are slowing down with respect to [orbital] phenomena”. This work has continued in several observatories. One analysis stated:
“Recently, several independent investigators have reported discrepancies between the optical observations and the planetary ephemerides. The discussions by Yao and Smith (1988, 1991, 1993) [77, 78, 79], Krasinsky et al. (1993), Standish & Williams (1990), Seidelman et al. (1985, 1986) [82, 83], Seidelman (1992), Kolesnik (1995, 1996) [85, 86], and Poppe et al. (1999) indicate that [atomic clocks had] a negative linear drift [slowing] before 1960, and an equivalent positive drift [speeding up] after that date. A paper by Yuri Kolesnik (Kolesnik, 1996) reports on positive drift of the planets relative to their ephemerides based on optical observations covering thirty years with atomic time. This study uses data from many observatories around the world, and all observations independently detect the planetary drifts. … [T]he planetary drifts Kolesnik and several other investigators have detected are based on accurate modern optical observations and they use atomic time. Therefore, these drifts are unquestionably real”. Now the data turnaround occurred near 1970. Van Flandern had considered data from the 1950’s up to the early 1980’s. He recently indicated that atomic clocks truly appeared to be slowing throughout the 1950’s and early 1960’s, but more recent data had shown the trend had certainly reversed. For this reason he at one time thought that his conclusions may have been mistaken.
The data plots by Kolesnik for Mercury, Venus and the Sun all suggest the slowing of atomic clocks reversed between 1960 and 1970. Masreliez states atomic clocks are now gaining about 7 seconds every 50 years over the orbital clock. The Pioneer anomaly is of like magnitude, and may originate in the same run-rate discrepancy. A data plot for Mercury similar to Kolesnik is in Figure 6. Other planetary and solar data give concordant results. This supports atomic clock trends in (18A) and conclusions on Relativity reached in Appendix 7.
Appendix 4 shows radiometric clocks follow other atomic clocks. When radiometric dates are compared with orbital dates for historical artefacts, the effects of the oscillation of the cosmos are reliably recorded from about 1650 BC up to the present. The data discussed in Appendix 4 implies the origin of the ZPE curve, minus the oscillation, started sometime before 2600 BC. Figure 8 graphs those data.
 B. Setterfield, ‘General Relativity & the Zero Point Energy,’ Journal of Theoretics, October, 2003 available (March 7, 2006) online at http://www.journaloftheoretics.com/Links/Papers/BS-GR.pdf.
 B. Haisch, A. Rueda, H.E. Puthoff, Speculations in Science and Technology, 20, (1997) 99-114.