When Alan Montgomery and I decided to look at the measurements of c back in 1992, we gathered every published measurement from all known sources. This gave us a master list to start with. We could not find any additional published data points, or we would have added them to this master list. The master list includes (a) published original measurements, (b) "reworkings" of many of these original data points by later investigators, and (c) subsequently published values of c which are not new data but are merely quoted from an original source. (For instance, a value published in the Encyclopedia Britannica is probably not an original data point.)
Naturally we needed to work with a unique data set which includes each valid measurement only once. In some cases we had to decide whether the original measurement or a later reworking of a given measurement was to be preferred. In the data set a couple of data points were so far out of line with the other nearby measurements that they were easily classified as outliers.
What was humorous to Alan and me is that if we take all the data from the master list, that is, all the raw data from all sources, including spurious points, outliers, and duplicates, and do a statistical analysis of THAT (flawed) data set, the statistical result is STILL non-constant c. (Alan and I did this for fun when we were selecting our "best data" set.)
We did our calculations on an Excel spreadsheet with embedded formulas so we could easily generate various subsets of the data. What we found was that the conclusion of non-constant c was still implied by each subset we generated. In other words, we tried everything we could think of to prove ourselves wrong, hence our continuing desire to receive valid criticisms of our statistical methods. If anyone has additional c data points that are not on our master list, let us know.
Our complete data sets are available: http://ldolphin.org/cdata.html
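For anyone who wants to repeat this kind of check, the sketch below fits a simple trend line first to the full raw list and then to a cleaned subset. It is written in Python rather than the Excel spreadsheet actually used, and the file name and column names ("year", "c_kms", "uncertainty_kms") are hypothetical stand-ins for the published data set.

```python
# Illustrative only: fit a trend to the raw master list, then to a cleaned subset.
# The CSV file and its column names are assumed, not the actual published layout.
import numpy as np
import pandas as pd

data = pd.read_csv("c_measurements.csv")   # columns: year, c_kms, uncertainty_kms

def slope_km_per_s_per_year(df):
    """Ordinary least-squares slope of c against year."""
    return np.polyfit(df["year"], df["c_kms"], 1)[0]

print("All raw data (duplicates and outliers included):",
      slope_km_per_s_per_year(data))

# A crude "best data" stand-in: drop wild outliers and keep one value per year.
best = (data[data["c_kms"].between(290_000, 320_000)]
        .drop_duplicates(subset="year"))
print("Cleaned subset:", slope_km_per_s_per_year(best))
```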
Setterfield: About 1000 AD, roughly. It was not very high. At most, it was about 320,000 km/s, compared with 299,792 km/s now. There is geological and astronomical evidence that there is an oscillation. The cause of the oscillation may be a slight vibration of the cosmos about its mean position after the end of Creation Week.
Setterfield: As the Vc (variable light-speed) theory is currently formulated (I am working on some changes that may be relevant to this discussion), your assessment of the situation is basically correct. You ask if there is any evidence to show that 'c' has changed since measurements began. Indeed, I believe there is. The 1987 Report canvasses that whole issue: all the measured data for light speed, Planck's constant, and the other physical quantities which are associated with them by theory are tabulated and treated statistically there. The data provide at least formal statistical evidence for a change in c. The error bars are graphed and can be viewed in the 1987 Report. It is fruitful to look at Figure 1, where the c values by the aberration method from Pulkova Observatory are plotted. Here the same equipment was used for over a century. Importantly, the errors are larger than for most methods, but the drop in c far exceeds the errors. Also tabulated in the Report are the values of c by each method individually, and each of these also displays a drop with time. Again, when the same equipment was used on a second occasion, a lower value was always obtained at the later date.

You ask about increased radiation doses. Appendix 2 of Behavior of the Zero Point Energy and Atomic Constants indicates that the energy density of radiation was lower in the past. As a consequence, even though radioactive decay, for example, occurred more quickly, the higher decay rate had no more deleterious effects than the lower decay rate today. One area of study that is currently being pursued is the effect on pleochroic haloes. While the radii of the haloes will remain constant, it appears that the higher decay rate may give rise to over-exposed haloes. One might expect a systematic tendency for them to be found in that condition in the oldest rocks of the geological column. However, annealing by heat can eliminate this effect, even at temperatures as low as 300° C.
Setterfield: No, I am not saying c is no longer decaying, but the exponential decay has an oscillation superimposed upon it. The oscillation only became prominent as the exponential declined. The oscillation appears to have bottomed out about 1980 or thereabouts. If that is the case (and we need more data to determine this exactly), then light-speed should be starting to increase again. The first minimum value for light-speed was about 0.6 times its present value. This occurred about 2700 - 2800 BC. This is as close to ‘zero’ as it came. The natural oscillation brought it back up to a peak that had a maximum value of about 310,000 - 320,000 km/sec. This was in the neighborhood of about 1000 AD. Today the value is 299,792.458 km/sec.

On the graphs, today’s value is the base line. This does not equal either zero itself or a change of zero. It is just the base line.
The following question has to do with this graph.
Setterfield: It should be pointed out that the graph was developed by comparing radiometric dates with historical dates. The discrepancy between the two is largely due to the behaviour of lightspeed, since that is the determining factor in the behaviour of all radiometric clocks. It will be noted that the data points show a scatter which is largely due to the 11 to 22 year solar cycle, which affects the carbon-14 content of our atmosphere.
The problem as stated in the above question results from a misreading of the graph. If you look carefully, you will note that the years AD are on the left, while the years BC are on the right. In other words, reading the graph from left to right takes us backwards in time. If you notice, the graph at the left-hand end finishes at today’s value for c.

There is probably also some confusion due to the fact that there is an oscillation in the behaviour of c that is even picked up in the redshift and geological measurements. This oscillation is on top of the general decay pattern. What this graph does is to show that oscillation in detail over the last 4000 years. And yes, the speed of light was lower than its current value in BC times because of that oscillation. However, by about 2700 BC it was climbing above the present value. It is because of this oscillation that carbon-14 dates do not coincide with historical dates.
The reason for the oscillation is that a static universe in which the mass of atomic particles is increasing is stable, but will gently oscillate. This has been shown to be the case by Narlikar and Arp in Astrophysical Journal Vol. 405 (1993), p. 51. As you know, the quantized redshift evidence suggests that the universe is static and not expanding, but evidence of the oscillation is also there. In terms of the Zero Point Energy (ZPE), this oscillation effectively increases the energy density of the ZPE when the universe has its minimum size, since the same amount of Zero Point Energy radiation is then contained in a smaller volume. The result is that lightspeed will be lower. Conversely, the energy density of the ZPE is less when the universe has its maximum diameter, and the speed of light will be higher as a result.
I trust that this clears up the confusion.
For statistician Alan Montgomery: I have yet to read a refutation of Aardsma's weighted uncertainties analysis in a peer-reviewed creationary journal. He came to the conclusion that the speed of light has been a constant. --A Bible college science professor.

Reply from Alan Montgomery: The correspondent has commented that nobody has refuted Dr. Aardsma's work in the ICR Impact article.
In Aardsma's work he took 163 data points from Barry Setterfield's 1987 monograph and put a weighted regression line through them. He found that the slope was negative (a decrease) but that its deviation from zero was only about one standard deviation. This would normally not be regarded as significant enough to draw a statistical conclusion.
In my 1994 ICC paper I demonstrated, among other things, the foolishness of using all the data--methods both with and without sensitivity to the question. You cannot use a ruler to measure the size of a bacterium. Second, I demonstrated that 92 of the data points he used were not corrected to in vacuo values, and therefore his data was a bad mixture. One cannot draw firm conclusions from such a statistical test.
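For readers unfamiliar with the technique under discussion, the sketch below shows a weighted linear regression of c against time and the "standard deviations from zero" test applied to its slope. The five data points are placeholders, not values from the 1987 monograph, and this is not Aardsma's actual calculation.

```python
# Sketch of a weighted regression significance test (placeholder data).
import numpy as np

year  = np.array([1875.0, 1902.0, 1926.0, 1950.0, 1972.0])
c_kms = np.array([299990.0, 299901.0, 299796.0, 299793.1, 299792.5])
sigma = np.array([200.0, 84.0, 4.0, 0.3, 0.01])       # quoted uncertainties

w = 1.0 / sigma**2                                     # weights from the error bars
S, Sx, Sy = w.sum(), (w * year).sum(), (w * c_kms).sum()
Sxx, Sxy  = (w * year**2).sum(), (w * year * c_kms).sum()
delta     = S * Sxx - Sx**2

slope    = (S * Sxy - Sx * Sy) / delta                 # km/s per year
slope_se = np.sqrt(S / delta)                          # standard error of the slope

print(f"slope = {slope:.5f} +/- {slope_se:.5f} km/s per year")
print(f"deviation from zero: {abs(slope) / slope_se:.1f} standard deviations")
```

With weights as disparate as these, the fit is dominated by the few most precise points, which is the kind of bias discussed below.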
I must point out to the uninitiated in statistical studies that there is a difference between a regression line and a regression model. A regression model attempts to provide a viable statistical estimate of the function which the data exhibits. The requirements of a model are that it must be:
(1) of minimum variance (a condition met by a regression line);
(2) homoskedastic - the data are of the same variance (a condition met by a weighted linear regression); and
(3) not autocorrelated - the residuals must not leave a non-random pattern.
My paper thus went a step further in identifying a proper statistical representation of the data. If I did not point it out in my paper, I will point it out here: Aardsma's weighted regression line was autocorrelated, which shows that the first two conditions and the data together imposed a result that is undesirable if one is trying to mimic the data with a function. The data are not evenly distributed and the weights are not evenly distributed. These biases are such that the final 11 data points determine the line almost completely. This being so, caution must be exercised in interpreting the results. Considering the bias in the weights and their small size, data with any significant deviation from them should not be used: such data add a great deal of variance to the line yet contribute nothing to its trend. In other words, the highly precise data determine the direction and size of the slope, and the very imprecise data make any result statistically insignificant. Aardsma's results are not so much wrong as unreliable for interpretation.
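Condition (3) can be checked numerically. The sketch below computes the Durbin-Watson statistic, one standard test for autocorrelation in residuals; the residual values are placeholders, not the residuals of Aardsma's fit, and there is no claim that this was the test used in either paper.

```python
# Durbin-Watson check for autocorrelated residuals (placeholder values).
# A value near 2 indicates no serial correlation; values well below 2
# indicate the kind of positive autocorrelation objected to above.
import numpy as np

residuals = np.array([3.1, 2.4, 1.9, 1.1, 0.2, -0.4, -1.0, -1.8, -2.2, -3.0])

dw = np.sum(np.diff(residuals)**2) / np.sum(residuals**2)
print(f"Durbin-Watson statistic = {dw:.2f}")
```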
The Professor may draw whatever conclusions he likes about Aardsma's work, but those who disagree with the hypothesis of decreasing c have rarely mentioned his work since. I believe for good reason. (October 14, 1999)
Note added by Brad Sparks: I happened to be visiting ICR and Gerry Aardsma just before his first Acts & Facts article came out attacking Setterfield. I didn't know what he was going to write but I did notice a graph pinned on his wall. I immediately saw that the graph was heavily biased to hide above-c values because the scale of the graph made the points overlap and appear to be only a few points instead of dozens. I objected to this representation and Aardsma responded by saying it was too late to fix, it was already in press. It was never corrected in any other forum later on either, to my knowledge.
What is reasonable evidence for a decrease in c that would be convincing to you? Do you require that every single data point would have to be above the current value of c? Or perhaps you require validation by mainstream science, rather than any particular type or quality of evidence. We have corresponded in the past on Hugh Ross and we seemed to be in agreement. Ross' position is essentially that there could not possibly ever be any linguistic evidence in the Bible to overturn his view that "yom" in the Creation Account meant long periods; his position is not falsifiable. This is the equivalent of saying that there is no Hebrew word that could have been used for 24-hour day in Genesis 1 ("yom" is the only Hebrew word for 24-hour day and Ross is saying it could not possibly mean that in Gen. 1). Likewise, you seem to be saying there is no conceivable evidence even possible hypothetically for a decrease in c, a position that is not falsifiable.
Response from Alan Montgomery: At the time I presented my Pittsburgh paper (1994) I looked at Humphreys' paper carefully, as I knew that comments on the previous analyses were going to be made mandatory. It became very apparent to me that Humphreys was relying heavily on Aardsma's flawed regression line. Furthermore, Aardsma had not done any analysis to prove that his weighted regression line made sense and was not some vagary of the data. Humphreys' paper was long on opinion and physics, but he did nothing I would call an analysis. I saw absolutely nothing in the way of statistics which required a response. In fact, the lack of any substantial statistical backup was an obvious flaw in the paper. To back up what he says requires defining the region where the decrease in c is observed and the region where it is constant. If Humphreys is right, the c-decreasing region should be early and the c-constant region should be late. There may be ambiguous areas in between. This he did not do. I repeat that Humphreys expressed an opinion but did not demonstrate statistically that it was true.

Secondly, Humphreys' argument that the apparent decrease in c can be explained by gradually decreasing errors was explicitly tested in my paper. The data was sorted by error bar size and regression lines were put through the data. By the mid-twentieth century the regression lines appeared to become insignificant. But then in the post-1945 data the decrease became significant again, in complete contradiction to his hypothesis. I would ask you, who in the last 5 years has explained that? Name one person!
Third, the aberration values not only decrease to the accepted value but decrease even further. If Humphreys' explanation is true, why would those values continue to decrease? Why would those values continue to decrease in a quadratic just as the non-aberration values do, and why would the coefficients of the quadratic functions of a weighted regression line be almost identical, despite the fact that the aberration and non-aberration data have highly disparate weights, centres of range and domain, and weighted range and domain?
Fourth, if what Humphreys claims is true, why are there many exceptions to his rule? For example, the Kerr cell data are significantly below the accepted value. According to Humphreys, there should therefore be a slowly increasing trend back to the accepted value. This simply is not true. The next values take a remarkable jump and then decrease. Humphreys made no attempt to explain this phenomenon. Why?
Humphreys' paper is not at all acceptable as a statistical analysis. My statement merely reflected the truth about what he had not done. His explanation is a post hoc rationalization.
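To make the subset tests mentioned above concrete (sorting by error-bar size, and comparing pre- and post-1945 data), here is a minimal sketch. The file name and column names are hypothetical, and this is not the analysis actually performed in the ICC paper.

```python
# Illustrative subset tests: split the data by quoted error-bar size and by era,
# then fit a trend to each group.  File and column names are assumed.
import numpy as np
import pandas as pd

data = pd.read_csv("c_measurements.csv")   # year, c_kms, uncertainty_kms

def slope(df):
    """OLS slope of c against year for one subset, in km/s per year."""
    return np.polyfit(df["year"], df["c_kms"], 1)[0]

# Split into quartiles by quoted uncertainty (smallest error bars first).
bins = pd.qcut(data["uncertainty_kms"], 4)
for label, group in data.groupby(bins, observed=True):
    print(f"uncertainty bin {label}: slope = {slope(group):+.5f}")

# Split at 1945, as in the pre/post-war comparison described above.
for is_post, group in data.groupby(data["year"] >= 1945):
    print(f"{'post' if is_post else 'pre'}-1945: slope = {slope(group):+.5f}")
```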
An additional response from a witness to the arguments regarding the data and Setterfield's responses (or accused lack of them): I have recently talked to Mr. Setterfield and he has referenced CRSQ vol. 25, March 1989, as his response to the argument brought up regarding the aberration measurements. The following two paragraphs are a direct quote from the Setterfield response to the articles published in previous editions by Aardsma, Humphreys, and Holt critiquing the Norman-Setterfield 1987 paper. This part of the response is from page 194. After having read this, I am at a loss to understand why anyone would say Setterfield has not responded regarding this issue.
References for the above:
Simon Newcomb, "The Elements of the Four Inner Planets and the Fundamental Constants of Astronomy," in the Supplement to the American Ephemeris and Nautical Almanac for 1897, p. 138 (Washington)
E.T. Whittaker, "A History of the Theories of Aether and Electricity," vol. 1, pp. 23, 95 (1910, Dublin)
K.A. Kulikov, "Fundamental Constants of Astronomy," pp 81-96 and 191-195, translated from Russian and published for NASA by the Israel Program for Scientific Translations, Jerusalem. Original dated Moscow 1955.
(Nov. 13, 1999)
Response from Alan Montgomery: This is all drivel. For example, 299792.458 ± 27.5 is drivel. 299792.458 is the speed of light; 27.5 km/sec/year is a linear rate of change of the speed of light. The two values are apples and oranges, and any plus or minus stuck between them is meaningless. The same goes for quoting .000092, or .0092%, change per year. This looks very small. But it is like a salesman who sells you a stove for a few dollars per day, over 3 years, at 18% interest to the finance company. The real question is "Is the change statistically significant?", and according to the model I produced in my ICC paper,

c(t) = c(0) + 0.03*t^2, or equivalently c(t) = (1 + 1.0*10^(-7)*t^2)*c(0),

the change is smaller than .000092 and still statistically significant.

(3/17/03) Additional comment from Alan Montgomery, in reply to a further email he received:
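A small arithmetic check may help connect the figures quoted here. Presumably the .000092 per year figure is just 27.5/299792.458 expressed as a fraction, and the two forms of the quadratic model agree because 0.03/299792.458 is about 1.0*10^(-7); both of those readings are assumptions made for illustration.

```python
# Arithmetic behind the quoted figures (illustrative only).
c0   = 299792.458     # km/s, the accepted speed of light
rate = 27.5           # km/s per year, the quoted linear rate of change

print(rate / c0)      # ~9.2e-5 per year, i.e. the .000092 (0.0092%) figure
print(0.03 / c0)      # ~1.0e-7 per year^2, linking the two model forms

# The two quoted forms of the quadratic model give essentially the same values:
for t in (10.0, 50.0, 100.0):
    print(t, c0 + 0.03 * t**2, c0 * (1 + 1.0e-7 * t**2))
```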
In 1994 I presented a paper on the secular decrease in the speed of light.
I used a weighted regression technique and found a quadratic regression model with statistically significant quadratic coefficients.
This has been reviewed by statisticians with Ph.D.s. Some have been convinced and others have not.
Also, a systematic search was made for obvious sources of the statistical decrease, that is, for a correlation with some technique or era that might bias the statistics. The result was a systematically low value for the aberration methodology. When these values were segregated from the non-aberration values, significant trends were found in both groups. Thus even the systematic errors that were found only reinforced the case.
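A minimal sketch of the segregation test just described: fit a weighted quadratic separately to the aberration and the non-aberration measurements and compare the coefficients. The file layout, column names, and the "method" label are assumptions, not the actual data set used in the 1994 paper.

```python
# Illustrative only: segregate aberration from non-aberration measurements
# and fit a weighted quadratic to each group.  File and column names assumed.
import numpy as np
import pandas as pd

data = pd.read_csv("c_measurements.csv")   # year, c_kms, uncertainty_kms, method

for is_aberration, group in data.groupby(data["method"] == "aberration"):
    name = "aberration" if is_aberration else "non-aberration"
    coeffs = np.polyfit(group["year"], group["c_kms"], 2,
                        w=1.0 / group["uncertainty_kms"])
    print(f"{name}: quadratic, linear, constant coefficients = {coeffs}")
```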
These figures having stood for over 10 years, I wonder why so many people still doubt the result when none have produced a better analysis or better data.
Setterfield: In the 1987 Report, I only used r, not R². N is the number of observations involved, right? Figure 1 is the Pulkova results, covering the two hundred years from 1740 to 1940, with the majority of the observations being in the second hundred years. The lines you see are the error bars (for those unacquainted with a graph like this), and they show that the measurements have a trend beyond the potential error in any given observation.
This is representative of what was done, and the total number of observations may well run past the tens of thousands and into the hundreds of thousands.
from Alan Montgomery: The statistical significance required is subjective. It depends on what you are using it for and how sure you have to be before taking action. You would demand a different level of significance if you were selling light bulbs than if you were selling atomic reactors.
Several of the tests I did on my statistical model were significant beyond the p < .001 level.