The Redshift

 

What is the redshift?
What does 'quantized' mean?
Is the redshift really quantized?
What are the implications of a quantized redshift?
Discrepant Redshifts
Reactions
Other Explanations?
The Redshift/Lightspeed Connection
A Lumpy Microwave Background?
Question regarding work by the Gentrys
An Accelerating Expansion of the Universe?
The Redshift, ZPE, and Relativity
Redshift and Color
The Lightspeed/Redshift Curve
The Lightspeed Curve and the Oscillation
Wikipedia and the Red Shift

 

What is the redshift?

Setterfield:  In the simplest terms, 'redshift' is a term used to describe the fact that the light seen from distant galaxies shows up a little differently than it does here on earth.  Each element has a 'fingerprint' in light.  This is how we know which elements are in which stars.  There is a certain pattern of lines associated with each element which identifies it, something like the bar codes you see on products in stores.  Each pattern or 'bar code' of dark lines shows up with a spectrometer at a specific place along the rainbow of visible light. However, as we get further and further out in space, these identifying lines, while keeping the same identifying patterns for each element, appear shifted somewhat to the red end of the spectrum -- thus causing the light to appear redder than it would be here on earth.  A somewhat more technical explanation can be found in the article  Is the Universe Static or Expanding?

The redshift is an astronomical term that describes the shifting of the spectral lines of atoms towards the red end of the spectrum when compared with a laboratory standard here on earth. Consequently, the redshift, z, is defined as the measured change in wavelength, when compared with the standard, divided by that laboratory standard wavelength. If the change in wavelength is given by ∆λ and the laboratory standard wavelength is represented by λ, then the redshift is defined as z =  ∆λ/λ  [Couderc, 1960 p.10, 91; Audouze & Israel, 1985 p.382, 356]
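The definition above can be checked numerically. A minimal sketch in Python follows; the hydrogen-alpha rest wavelength of 656.3 nm is a standard laboratory value, while the observed wavelength is an illustrative number chosen to give a redshift near 0.1.

```python
# Redshift z = (observed wavelength - rest wavelength) / rest wavelength.
# Rest wavelength below is the hydrogen-alpha laboratory standard (656.3 nm);
# the observed value is illustrative.

def redshift(observed_nm, rest_nm):
    """Return z = delta-lambda / lambda for a spectral line."""
    return (observed_nm - rest_nm) / rest_nm

z = redshift(observed_nm=721.9, rest_nm=656.3)
print(round(z, 3))  # a line shifted from 656.3 nm to 721.9 nm gives z of about 0.1
```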

 

Question:  Maybe I'm missing something, but I don't understand what an argument from redshift to an old universe would be.

Setterfield:  from Helen: the further out in space you go, the further back in time you go. So the further out we look, the more ancient the age we are viewing. The redshifting of light increases with distance, so the higher the redshift, the further back in time we are seeing. The question is how long ago the events we are looking at happened. This is where Barry's research comes in.

from Barry: The missing component here is the Hubble Constant, which converts redshift measurements to distance -- and therefore age -- given a constant value for the speed of light, and presuming that redshift is a measure of expansion.  Consequently, the value of the Hubble Constant has been a very hot topic in astronomy. There are two major camps regarding the Hubble Constant: a high-value camp (about 65 km/sec/megaparsec), which results in a younger universe, and a low-value camp (about 55 km/sec/megaparsec), which results in an age of about 20 billion years. To this end, the Hubble Space Telescope has been of some help, giving a value of the Hubble Constant of around 65 km/sec/megaparsec, which translates into a universe age of about 12-14 billion years.  As a consequence of this, discrepancies in the Hubble Constant are very vigorously 'discussed.'  In addition, anything which decouples the link between redshift and expansion is viewed with horror, as both of these factors link to the age of the universe through the Hubble Constant.
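The way a Hubble Constant value translates into a rough age can be illustrated with the Hubble time, 1/H0. The Python sketch below does only that unit conversion; it deliberately ignores any deceleration or acceleration of the expansion, which is why it slightly overshoots the 12-14 billion year figure quoted here.

```python
# Rough Hubble-time estimate 1/H0, converting km/s/Mpc into years.
# This ignores deceleration/acceleration of the expansion, so it only
# approximates the ages quoted for each Hubble Constant camp.

KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

def hubble_time_gyr(h0_km_s_mpc):
    """Return 1/H0 in billions of years for H0 given in km/s/Mpc."""
    seconds = KM_PER_MPC / h0_km_s_mpc / SECONDS_PER_YEAR
    return seconds / 1e9

print(round(hubble_time_gyr(65), 1))  # ~15 Gyr for the high-value camp
print(round(hubble_time_gyr(55), 1))  # ~18 Gyr for the low-value camp
```

A lower Hubble Constant gives a longer Hubble time, which is why the low-value camp favours an older universe.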

 

Question:  One of the most interesting things I read in your article is that the wavelength of light partakes in expansion in transit through space. I believe you referenced the Astrophysical Journal 429:491-498, 10 July, 1994.  I don't have immediate access to this journal. It was mentioned almost as a side issue, I want to know if you agree with it and how it serves your theory if you do. I'm also interested in the physical reasoning, observation, and mathematics of the journal article itself. Thank you.

Setterfield:   Yes! According to the Friedmann model of the universe, which is basically Einsteinian, as space expands, the wavelengths of light in transit become stretched also. This is how the redshift of light from distant galaxies is accounted for by standard Big Bang cosmology. The reference is correct, but any serious text on the redshift will give the same story. This does not serve our theory except for one point. The redshift has been shown by Tifft to be quantised. It goes in jumps of about 2.7 km/s. It is very difficult to account for this by a smooth expansion of space. Alternatively, if the quantisation is accepted as an intrinsic feature of emitted wavelengths (rather than wavelengths in transit), it means that the cosmos cannot be expanding (or contracting) as the exact quantisations would be "smeared out" as the wavelengths stretched or contracted in transit.

 

Question: I would like to know if the observed redshifts in the cosmos appear without exception. In other words, do all distant galaxies appear to be redshifted? If so, how do cosmologists account for this uniformity?

Setterfield:  Yes, all galaxies outside of our local group of galaxies show a redshift.  This redshift increases systematically with distance.  The current mainstream explanation is that it is due to the expansion of the universe, where light waves in transit are stretched, or lengthened, giving rise to a redder colour.  However, this explanation has a problem with the quantized redshift measurements first made in 1976 by William Tifft and since confirmed by a number of others.  These astronomers have shown that the redshift is not changing smoothly, as expansion would have produced.  Instead the measurements are going in a series of jumps.  I have explained this more fully below.  If something I have said there confuses you, please feel free to write back. 

 

Question:  Thank you for your response.  My confusion does not arise from anything I have read on your website.  I am trying to understand why a redshift that is so inclusive is considered to be Doppler in nature.  Please excuse my lack of formal education in this area.  However, this train of reasoning would have to assume that "all" galaxies are moving away from us.  If expansion is the reason for the redshift, then only those galaxies that are moving away should be redshifted.  I would think that expansion would dictate that galaxies are moving away from each other, and not all in the same direction. One has to ask if it is logical to assume that everything is moving in the same direction as the lightwaves are stretched.  Am I missing something?

Setterfield:  Thank you for your additional enquiry.  Your "lack of formal education" is, perhaps, adding to your common sense.

As far as the redshift is concerned, it is all inclusive, and it is progressive. That is to say, galaxies further away have a higher redshift. The further away, the greater the redshift. A good picture of what is being envisaged by astronomers is to blow up a balloon. If there were evenly spaced spots marked on the balloon, and a particular one was watched as the balloon expanded out, it would be noticed that every spot moved away from every other spot. The further away they were, the greater the distance each one had moved in a given time. This is the picture that astronomers have of the expanding universe. 

Now there are two ways to achieve this result. First -- as Einstein proposed -- the fabric of space is static and the galaxies are moving through it.  This would give rise to a Doppler shift.  Second, space itself is expanding and carrying the galaxies with it, as Lemaître proposed.  In this case, light waves are stretched as they travel through an expanding space, and the redshift arises from this stretching.  This may give you a better idea of what has been proposed by astronomers.

 

What does 'quantized' mean?

Setterfield:  When we refer to a series of measurements as being quantized, we are referring to the fact that they show up in jumps and not as a smooth, continuous function.  It would be as if an accelerating car were seen going 5 mph, then 10 mph, then 15 mph, and so on, but never at any speed in between.  A smooth series of measurements, like that of a car accelerating normally, is what would be expected, but this is evidently not what the data show.  It is for this reason that the assumption of an expanding universe based on redshift measurements may be false.  Could the universe expand in jumps?
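The car analogy can be made concrete with a small Python sketch: values quantized at some interval sit near exact multiples of it, while smoothly distributed values do not. All the numbers below are invented purely for illustration.

```python
# Toy illustration of 'quantized' vs smooth measurements: the distance of
# each quantized value from the nearest multiple of the interval is small,
# while smooth values land anywhere. All values here are invented.

def residuals(values, interval):
    """Distance of each value from the nearest multiple of `interval`."""
    return [min(v % interval, interval - v % interval) for v in values]

quantized = [72.0, 144.1, 215.9, 288.2]  # hypothetical 'jumped' measurements
smooth    = [70.3, 131.8, 205.5, 266.1]  # hypothetical continuous measurements

print(residuals(quantized, 72.0))  # all close to zero
print(residuals(smooth, 72.0))     # scattered across the interval
```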

 

Is the redshift really quantized?

Setterfield: A genuine redshift anomaly seems to exist, one that would cause a re-think about cosmological issues if the data are accepted. Let’s look at this for just a moment. As we look out into space, the light from galaxies is shifted towards the red end of the spectrum.  The further out we look, the redder the light becomes.  The measure of this redshifting of light is given by the quantity z, which is defined as the change in wavelength of a given spectral line divided by the laboratory standard wavelength for that same spectral line. Each atom has its own characteristic set of spectral lines, so we know when that characteristic set of lines is shifted further down towards the red end of the spectrum.  This much was noted in the early 1920’s. Around 1929, Hubble noted that the more distant the galaxy was, the greater was the value of the redshift, z.  Thus was born the redshift/distance relationship. It came to be accepted as a working hypothesis that z might be a kind of Doppler shift of light because of universal expansion.  In the same way that the siren of a police car drops in pitch when it races away from you, so it was reasoned that the redshifting of light might represent the distant galaxies racing away from us with greater velocities the further out they were. The pure number z was then multiplied by the value of lightspeed in order to change z to a velocity. However, Hubble was not content with this interpretation. Even as recently as the mid 1960’s Paul Couderc of the Paris Observatory expressed misgivings about the situation and mentioned that a number of astronomers felt likewise. In other words, accepting z as a pure number was one thing; expressing it as a measure of universal expansion was something else.

It is at this point that Tifft’s work enters the discussion. In 1976, William Tifft, an astronomer from Arizona, started examining redshift values. The data indicated that the redshift, z, was not a smooth function but went in a series of jumps.   Between successive jumps the redshift remained fixed at the value attained at the last jump.  The editor of the Astrophysical Journal, which published the first article by Tifft, made a comment in a footnote to the effect that they did not like the idea, but the referees could find no basic flaw in the presentation, so publication was reluctantly agreed to. Further data came in supporting z quantisation, but the astronomical community could not generally accept the data because the prevailing interpretation of z was that it represented universal expansion, and it would be difficult to find a reason for that expansion to occur in “jumps”. In 1981 the extensive Fisher-Tully redshift survey was published, and the redshifts were not clustered in the way that Tifft had suggested. But an important development occurred in 1984 when Cocke pointed out that the motion of the Sun and solar system through space had a genuine Doppler shift that added to or subtracted from every redshift in the sky.  Cocke pointed out that when this true Doppler effect was removed from the Fisher-Tully observations, there were redshift “jumps” or quantisations globally across the whole sky, and this from data that had not been collected by Tifft.  In the early 1990’s Bruce Guthrie and William Napier of Edinburgh Observatory specifically set out to disprove redshift quantisation using the best enlarged sample of accurate hydrogen line redshifts. Instead of disproving the z quantisation proposal, Guthrie and Napier ended up confirming it.  The quantisation was supported by a Fourier analysis and the results published around 1995. The published graph showed over 60 successive peaks and troughs of precise redshift quantisations.
There could be no doubt about the results.  Comments were made in New Scientist, Scientific American and a number of other lesser publications, but generally, the astronomical community treated the results with silence.

If redshifts come from an expanding cosmos, the measurements should be distributed smoothly like the velocity of cars on a highway. The quantised redshifts are similar to every car traveling at some multiple of 5 miles per hour. Because the cosmos cannot be expanding in jumps, the conclusion to be drawn from the data is that the cosmos is not expanding, nor are galaxies racing away from each other. Indeed, at the Tucson Conference on Quantization in April of 1996, the comment was made that "[in] the inner parts of the Virgo cluster [of galaxies], deeper in the potential well, [galaxies] were moving fast enough to wash out the quantization." In other words, the genuine motion of galaxies destroys the quantisation effect, so the quantised redshift is not due to motion, and hence not to an expanding universe. This implies that the cosmos is now static after initial expansion. Interestingly, there are about a dozen references in the Scriptures which talk about the heavens being created and then stretched out. Importantly, in every case except one, the tense of the verb indicated that the "stretching out" process was completed in the past. This is in line with the conclusion to be drawn from the quantised redshift. Furthermore, the variable lightspeed (Vc) model of the cosmos gives an explanation for these results, and can theoretically predict the size of the quantisations to within a fraction of a kilometer per second of that actually observed. This seems to indicate that a genuine effect is being dealt with here.

One basis on which Guthrie and Napier's conclusions have been questioned and/or rejected concerns the reputed "small" size of the data set.  It has been said that if the size of the data set is increased, the anomaly will disappear. Interestingly, the complete data set used by Guthrie and Napier comprised 399 values.  This was an entirely different data set from the many used by Tifft.  Thus there is no 'small' data set, but a series of rather large ones.  Every time a data set has been increased in size, the anomaly has become more prominent.

When Guthrie and Napier's material was statistically treated by a Fourier analysis, a very prominent “spike” emerged in the power spectrum, which supported redshift quantisation at a very high confidence level. The initial study was done with a smaller data set and submitted to Astronomy and Astrophysics. The referees asked them to repeat the analysis with another set of galaxies.  They did so, and the same quantisation figure emerged clearly from the data, as it did from both data sets combined.  As a result, their full analysis was accepted and the paper published.  It appears that the full data set was large enough to convince the referees and the editor that there was a genuine effect being observed – a conclusion that other publications acknowledged by reporting the results. (Guthrie, B.N.G. and Napier, W.M. 1996, Astron. Astrophys. 239:33)
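The kind of periodicity test described above can be sketched with a simple phase-coherence statistic (a Rayleigh-style power). This is not the actual Guthrie-Napier analysis, and the data below are synthetic; the sketch only illustrates how a set of values quantized at a fixed interval produces a sharp spike at that period and little power elsewhere.

```python
# Sketch of a power-spectrum periodicity test: values spaced at multiples of
# a fixed interval produce a strong spike at the corresponding period.
# The data are synthetic, not the Guthrie-Napier redshifts.
import math

def power(values, period):
    """Rayleigh-style power: coherence of the phases 2*pi*v/period."""
    n = len(values)
    c = sum(math.cos(2 * math.pi * v / period) for v in values)
    s = sum(math.sin(2 * math.pi * v / period) for v in values)
    return (c * c + s * s) / n

data = [37.5 * k for k in range(1, 41)]  # perfectly quantized at 37.5 km/s
print(round(power(data, 37.5), 1))       # maximal spike: equals n = 40.0
print(power(data, 29.0) < 5)             # True: little power off-period
```

Real analyses scan many trial periods and assess the tallest spike against the power expected from random values.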

It is never good science to ignore anomalous data or to eliminate a conclusion because of some presupposition. Sir Henry Dale, one time President of the Royal Society of London, made an important comment in his retirement speech. It was reported in Scientific Australian for January 1980, p.4. Sir Henry said: "Science should not tolerate any lapse of precision, or neglect any anomaly, but give Nature's answers to the world humbly and with courage." To do so may not place one in the mainstream of modern science, but at least we will be searching for truth and moving ahead rather than maintaining the scientific status quo.

Quantized Redshifts and the Zero Point Energy may also be of help here, as well as Zero Point Energy and the Redshift. Two other interesting articles may be found at  More Evidence for Galactic "Shells" or "Something Else" and Galactic Shell Game.

 

Question:  How precise was Tifft's work?  Was he just seeing something he wanted to see?

Setterfield:  Firstly, the initial quantisation that Tifft picked up was around 72 km/s. Later he noticed that there was one about half that, near 36 km/s, and another around 24 km/s. Guthrie and Napier in 1991 set out to disprove the thesis using neutral hydrogen redshifts that had an accuracy of better than 4 km/s. Instead, their final assessment confirmed a prominent 37.5 km/s quantisation, supported by a Fourier analysis which put the odds of the quantisation arising by chance at roughly one in a million [Progress in New Cosmologies, Plenum Press, 1991; Scientific American 267:6 (1992) p. 19; New Scientist, 9 July (1994), p.17; Science 271 (1996), 759]. An assessment of this study is given by Halton Arp in Seeing Red, pp. 198-200, (Apeiron, 1998), along with Guthrie and Napier's data graphs.

Meanwhile, Tifft had continued to work with data using the hydrogen 21 cm line. As B. M. Lewis of the Jodrell Bank and MERLIN network points out, the redshifts measured in this range have an accuracy of better than 0.1 km/s at a very high signal to noise ratio [Lewis, Observatory 107 (1987), 201]. Using these data Tifft determined that the earlier quantisation figures were multiples of a "presumably more basic common interval near 8 km/s." [Properties of the Redshift III, Astrophysical Journal, 382:396-415, (1991) Dec. 1.]. In that paper, Tifft's analyses conclude that this common interval was in fact 7.997 km/s, and that the structure of the diagrams indicated that the final quantisation figure was one third of that. This gives a value for the most basic quantisation of 8/3 = 2.667 km/s or, if the statistically treated data are accepted, 7.997/3 = 2.666 km/s. The cDK work indicates a basic quantisation of 2.671 km/s, which, when multiplied by 3, gives a value of 8.013 km/s, which is very close to Tifft's "basic common interval near 8 km/s" that is supported by the 21 cm line data and well within its limits of accuracy.
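The arithmetic above can be checked in a couple of lines (note that 7.997/3 rounds to 2.666 at three decimal places):

```python
# Checking the arithmetic in the paragraph above: Tifft's ~8 km/s common
# interval divided by three, versus the cDK 2.671 km/s figure scaled back up.

basic_tifft = 7.997 / 3       # Tifft's common interval split into thirds
print(round(basic_tifft, 3))  # 2.666 km/s

from_cdk = 2.671 * 3          # the cDK basic quantisation times three
print(round(from_cdk, 3))     # 8.013 km/s, close to the ~8 km/s interval
```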

Question:  I heard that there is published evidence that the Red Shift is quantized. Can you explain this in lay terms?

Setterfield: The following quotation concerning this phenomenon is from "Quantized Galaxy Redshifts" by William G. Tifft & W. John Cocke, University of Arizona, Sky & Telescope, January 1987, pp. 19-21: As the turn of the next century approaches, we again find an established science in trouble trying to explain the behavior of the natural world. This time the problem is in cosmology, the study of the structure and "evolution" of the universe as revealed by its largest physical systems, galaxies and clusters of galaxies. A growing body of observations suggests that one of the most fundamental assumptions of cosmology is wrong.

Most galaxies' spectral lines are shifted toward the red, or longer wavelength, end of the spectrum. Edwin Hubble showed in 1929 that the more distant the galaxy, the larger this "redshift". Astronomers traditionally have interpreted the redshift as a Doppler shift induced as the galaxies recede from us within an expanding universe. For that reason, the redshift is usually expressed as a velocity in kilometers per second.

One of the first indications that there might be a problem with this picture came in the early 1970's. William G. Tifft of the University of Arizona noticed a curious and unexpected relationship between a galaxy's morphological classification (Hubble type), brightness, and redshift. The galaxies in the Coma Cluster, for example, seemed to arrange themselves along sloping bands in a redshift vs. brightness diagram. Moreover, the spirals tended to have higher redshifts than elliptical galaxies. Clusters other than Coma exhibited the same strange relationships.

By far the most intriguing result of these initial studies was the suggestion that galaxy redshifts take on preferred or "quantized" values. First revealed in the Coma Cluster redshift vs. brightness diagram, it appeared as if redshifts were in some way analogous to the energy levels within atoms.

These discoveries led to the suspicion that a galaxy's redshift may not be related to its Hubble velocity alone. If the redshift is entirely or partially non-Doppler (that is, not due to cosmic expansion), then it could be an intrinsic property of a galaxy, as basic a characteristic as its mass or luminosity. If so, might it truly be quantized?

Clearly, new and independent data were needed to carry this investigation further. The next step involved examining the rotation curves of individual spiral galaxies. Such curves indicate how the rotational velocity of the material in the galaxy's disk varies with distance from the center.

Several well-studied galaxies, including M51 and NGC 2903, exhibited two distinct redshifts. Velocity breaks, or discontinuities, occurred at the nuclei of these galaxies. Even more fascinating was the observation that the jump in redshift between the spiral arms always tended to be around 72 kilometers per second, no matter which galaxy was considered. Later studies indicated that velocity breaks could also occur at intervals that were 1/2, 1/3, or 1/6 of the original 72 km per second value.

At first glance it might seem that a 72 km per second discontinuity should have been obvious much earlier, but such was not the case. The accuracy of the data then available was insufficient to show the effect clearly. More importantly, there was no reason to expect such behavior, and therefore no reason to look for it. But once the concept was defined, the ground work was laid for further investigations.

The first papers in which this startling new evidence was presented were not warmly embraced by the astronomical community. Indeed, an article in the Astrophysical Journal carried a rare note from the editor pointing out that the referees "neither could find obvious errors with the analysis nor felt that they could enthusiastically endorse publication." Recognizing the far-reaching cosmological implications of the single-galaxy results, and undaunted by criticism from those still favoring the conventional view, Tifft extended the analysis to pairs of galaxies.

Two galaxies physically associated with one another offer the ideal test for redshift quantization; they represent the simplest possible system. According to conventional dynamics, the two objects are in orbital motion about each other. Therefore, any difference in redshift between the galaxies in a pair should merely reflect the difference in their orbital velocities along the same line of sight. If we observe many pairs covering a wide range of viewing angles and orbital geometries, the expected distribution of redshift differences should be a smooth curve. In other words, if redshift is solely a Doppler effect, then the differences between the measured values for members of pairs should show no jumps.

But this is not the situation at all. In various analyses the differences in redshift between pairs of galaxies tend to be quantized rather than continuously distributed. The redshift differences bunch up near multiples of 72 km per second. Initial tests of this result were carried out using available visible-light spectra, but these data were not sufficiently accurate to confirm the discovery with confidence. All that changed in 1980 when Steven Peterson, using telescopes at the National Radio Astronomy Observatory and Arecibo, published a radio survey of binary galaxies made in the 21-cm emission of neutral hydrogen.

Wavelength shifts can be pegged much more precisely for the 21cm line than for lines in the visible portion of the spectrum. Specifically, redshifts at 21 cm can be measured with an accuracy better than the 20 km per second required to detect clearly a 72 km per second periodicity.

Redshift differences between pairs group around 72, 144, and 216 km per second. Probability theory tells us that there are only a few chances in a thousand that such clumping is accidental. In 1982 an updated study of radio pairs and a review of close visible pairs demonstrated this same periodic pattern at similarly high significance levels.

Radio astronomers have examined groups of galaxies as well as pairs. There is no reason why the quantization should not apply to larger collections of galaxies, so redshift differentials within small groups were collected and analyzed. Again a strongly periodic pattern was confirmed.

The tests described so far have been limited to small physical systems; each group or pair occupies only a tiny region of the sky. Such tests say nothing about the properties of redshifts over the entire sky. Experiments on a very large scale are certainly possible, but they are much more difficult to carry out.

One complication arises from having to measure galaxy redshifts from a moving platform. The motion of the solar system, assuming a Doppler interpretation, adds a real component to every redshift. When objects lie close together in the sky, as with pairs and groups, this solar motion cancels out when one redshift is subtracted from another, but when galaxies from different regions of the sky are compared, such a simple adjustment can no longer be made. Nor can we apply the difference technique; when more than a few galaxies are involved, there are simply too many combinations. Instead we must perform a statistical test using the redshifts themselves.
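The solar-motion correction described here amounts to subtracting the projection of the Sun's velocity onto each galaxy's line of sight. A minimal sketch follows; the speed and angle are invented for illustration, not a real solar-apex solution.

```python
# Sketch of removing the solar-motion dipole from an observed velocity:
# subtract the component of the Sun's motion along the line of sight.
# The numbers are illustrative, not a real solar-apex solution.
import math

def corrected_velocity(v_observed, v_sun, angle_to_apex_deg):
    """Observed velocity minus the solar-motion component along the line of sight."""
    return v_observed - v_sun * math.cos(math.radians(angle_to_apex_deg))

# A galaxy seen 60 degrees from the solar apex, with the Sun taken to move at 300 km/s:
print(round(corrected_velocity(1000.0, 300.0, 60.0), 1))  # 850.0 km/s
```

Galaxies near the apex get the full correction, those at right angles none, which is why the correction matters for all-sky comparisons but cancels within close pairs.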

As these first all-sky redshift studies began, there was no assurance that the quantization rules already discovered for pairs and groups would apply across the universe. After all, galaxies that were physically related were no longer being compared. Once again it was necessary to begin with the simplest available systems. A small sample of dwarf irregular galaxies spread around the sky was selected.

Dwarf irregular galaxies are low-mass systems that have a significant fraction of their mass tied up in neutral hydrogen gas. They have little organized internal or rotational motion and so present few complications in the interpretation of their redshifts. In these modest collections of stars we might expect any underlying quantum rules to be the least complex. Early 20th century physicists chose a similar approach when they began their studies of atomic structure; they first looked at hydrogen, the simplest atom.

The analysis of dwarf irregulars was revised and improved when an extensive 21-cm redshift survey of dwarf galaxies was published by J. Richard Fisher and R. Brent Tully. Once the velocity of the solar system was accounted for, the irregulars in the Fisher-Tully Catalogue displayed an extraordinary clumping of redshifts. Instead of spreading smoothly over a range of values, the redshifts appeared to fall into discrete bins separated by intervals of 24 km per second, just 1/3 of the original 72 km per second interval. The Fisher-Tully redshifts are accurate to about 5 km per second. At this small level of uncertainty the likelihood that such clumping would randomly occur is just a few parts in 100,000.

Large-scale redshift quantization needed to be confirmed by analyzing redshifts of an entirely different class of objects. Galaxies in the Fisher-Tully catalogue that showed large amounts of rotation and internal motion (the opposite extreme from the dwarf irregulars) were studied.

Remarkably, using the same solar-motion correction as before, the galaxies' redshifts again bunched around certain specific values. But this time the favored redshifts were separated by exactly 1/2 of the basic 72 km per second interval. This is clearly evident. Even allowing for this change to a 36 km per second interval, the chance of accidentally producing such a preference is less than 4 in 1000. It is therefore concluded that at least some classes of galaxy redshifts are quantized in steps that are simple fractions of 72 km per second.

Current cosmological models cannot explain this grouping of galaxy redshifts around discrete values across the breadth of the universe. As further data are amassed, the discrepancies from the conventional picture will only worsen. If so, dramatic changes in our concepts of large-scale gravitation, the origin and "evolution" of galaxies, and the entire formulation of cosmology would be required.

Several ways can be conceived to explain this quantization. As noted earlier, a galaxy's redshift may not be a Doppler shift; that is the currently accepted interpretation of the redshift, but there can be, and are, other interpretations. A galaxy's redshift may be a fundamental property of the galaxy. Each may have a specific state governed by laws, analogous to those in quantum mechanics, that specify which energy states atoms may occupy. Since there is relatively little blurring of the quantization between galaxies, any real motions would have to be small in this model. Galaxies would not move away from one another; the universe would be static instead of expanding.

This model obviously has implications for our understanding of redshift patterns within and among galaxies. In particular it may solve the so-called "missing mass" problem. Conventional analysis of cluster dynamics suggests that there is not enough luminous matter to gravitationally bind moving galaxies to the system.

 

What are the implications of a quantized redshift?

Setterfield:  If redshifts come from an expanding cosmos, the measurements should be distributed smoothly, but they are not.  They are showing up as clumps, or quantized groupings.  Because the cosmos cannot be expanding in jumps, the conclusion to be drawn from the data is that the cosmos is not expanding, nor are galaxies racing away from each other. Indeed, at the Tucson Conference on Quantization in April of 1996, the comment was made that “[in] the inner parts of the Virgo cluster [of galaxies], deeper in the potential well, [galaxies] were moving fast enough to wash out the quantization.” In other words, the genuine motion of galaxies destroys the quantisation effect, so the quantised redshift is not due to motion, and hence not to an expanding universe. This implies that the cosmos is now static after initial expansion. Interestingly, there are about a dozen references in the Scriptures which talk about the heavens being created and then stretched out. Importantly, in every case except one, the tense of the verb indicated that the “stretching out” process was completed in the past. This is in line with the conclusion to be drawn from the quantised redshift. Furthermore, the variable lightspeed model of the cosmos gives an explanation for these results, and can theoretically predict the size of the quantisations to within a fraction of a kilometer per second of that actually observed. This seems to indicate that a genuine effect is being dealt with here.

You will find a number of my papers dealing with the implications of the quantized redshift in the Research Papers section of this website.

 

Question:  Does the redshift drop with time the way light speed does?

Setterfield:  Yes, the Vc model predicts the redshift should drop with time, but the period over which this change occurs for any given galaxy or cluster is more difficult to determine, and depends on modeling.  Nevertheless, a drop in some redshifts over time has been noted by Tifft. He has noted a decrease in redshift values with time in several associated galaxies.  In the Astrophysical Journal, Vol. 382:396-415, December 1st 1991, he records the groups where one quantum change has occurred.  He emphasizes that all the older data recorded higher redshifts. The time period between the recordings of redshift was about 10 years.  However, we do not know how long the galaxies had persisted at the previous redshift, before the change.

 

Discrepant Redshifts

Cosmic Dark Energy?

Barry Setterfield  (April 11, 2001)

There has been much interest generated in the press lately over the analysis by Dr. Adam G. Riess and Dr. Peter E. Nugent of the decay curve of the distant supernova designated SN 1997ff. In fact, over the last two years, a total of four supernovae have led to the current state of excitement. The reason for the surge of interest is the distances at which these supernovae are found when compared with their redshifts, z. According to the majority of astronomical opinion, the relationship between an object's distance and its redshift should be a smooth function. Thus, given a redshift value, the distance of an object can be reasonably estimated.

One way to check this is to measure the apparent brightness of an object whose intrinsic luminosity is known. Then, since brightness falls off by the inverse square of the distance, the actual distance can be determined. For very distant objects something of exceptional brightness is needed. There are such objects that can be used as 'standard candles', namely supernovae of Type Ia. They have a distinctive decay curve for their luminosity after the supernova explosion, which allows them to be distinguished from other supernovae.
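The inverse-square reasoning above can be put into numbers. The sketch below (Python; the luminosity and flux figures are purely illustrative placeholders, not real supernova data) inverts the relation flux = L / (4πd²) to recover a distance, and shows that a source four times fainter must be twice as far away:

```python
import math

def distance_from_brightness(luminosity, observed_flux):
    """Invert the inverse-square law: flux = L / (4 * pi * d**2)."""
    return math.sqrt(luminosity / (4 * math.pi * observed_flux))

# Illustrative numbers only -- not measured values:
L = 1.0e36           # assumed intrinsic luminosity, watts
f_near = 1.0e-9      # observed flux of a nearer source, W/m^2
f_far = 2.5e-10      # a source four times fainter

d_near = distance_from_brightness(L, f_near)
d_far = distance_from_brightness(L, f_far)

# Four times fainter means twice as distant:
print(round(d_far / d_near, 6))  # → 2.0
```

This is the same logic astronomers apply to Type Ia supernovae: the intrinsic luminosity is taken as known, so the measured flux alone fixes the distance.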

In this way, the following four supernovae have been examined as a result of photos taken by the Hubble Space Telescope. SN 1997ff at z = 1.7; SN 1997fg at z = 0.95; SN 1998ef at z = 1.2; and SN 1999fv also at z = 1.2. The higher the redshift z, the more distant the object should be. Two years ago, the supernovae at z = 0.95 and z = 1.2 attracted attention because they were FAINTER and hence further away than expected. This led cosmologists to state that Einstein's Cosmological Constant must be operating to expand the cosmos faster than its steady expansion from the Big Bang. Now the object SN 1997ff, the most distant of the four, turns out to be BRIGHTER than expected for its redshift value. This interesting turn of events has elicited the following comments from Adrian Cho in New Scientist for 7 April, 2001, page 6 in an article entitled "What's the big rush?"

"Two years ago, two teams of astronomers reported that distant stellar explosions known as type Ia supernovae, which always have the same brightness, appeared about 25 per cent dimmer from Earth than expected from their red shifts. That implied that the expansion of the Universe has accelerated. This is because the supernovae were further away than they ought to have been if the Universe had been expanding at a steady rate for the billions of years since the stars exploded. But some researchers have argued that other phenomena might dim distant supernovae. Intergalactic dust might soak up their light, or type Ia supernovae from billions of years ago might not conform to the same standard brightness they do today."

"This week's supernova finding seems to have dealt a severe blow to these [alternative] arguments [and supports] an accelerating Universe. The new supernova's red shift implies it is 11 billion light years away, but it is roughly twice as bright as it should be. Hence it must be significantly closer than it would be had the Universe expanded steadily. Neither dust nor changes in supernova brightness can easily explain the brightness of the explosion."

"Dark energy [the action of the Cosmological Constant, which acts in reverse to gravity] can, however. When the Universe was only a few billion years old, galaxies were closer together and the pull of their gravity was strong enough to overcome the push of dark energy and slow the expansion. A supernova that exploded during this period would thus be closer than its red shift suggests. Only after the galaxies grew farther apart did dark energy take over and make the Universe expand faster. So astronomers should see acceleration change to deceleration as they look farther back in time. "This transition from accelerating to decelerating is really the smoking gun for some sort of dark energy," Riess says.

Well, that is one option. Interestingly, there is another option well supported by other observational evidence. For the last two decades, astronomer William Tifft of Arizona has pointed out repeatedly that the redshift is not a smooth function at all but is, in fact, going in "jumps", or is quantised. In other words, it proceeds in a steps and stairs fashion. Tifft's analyses were disputed, so in 1992 Guthrie and Napier conducted a study to disprove the claim. They ended up agreeing with Tifft. The results of that study were themselves disputed, so Guthrie and Napier conducted an exhaustive analysis on a whole new batch of objects. Again, the conclusions confirmed Tifft's contention. The quantisations of the redshift that were noted in these studies were on a relatively small scale, but analysis revealed a basic quantisation that was at the root of the effect, of which the others were simply higher multiples. However, this was sufficient to indicate that the redshift was probably not a smooth function at all. If these results were accepted, then the whole interpretation of the redshift, namely that it represented the expansion of the cosmos by a Doppler effect on light waves, was called into question. This becomes apparent since there was no good reason why that expansion should go in a series of jumps, any more than cars on a highway should travel only in multiples of 5 kilometres per hour.

In 1990, Burbidge and Hewitt reviewed the observational history of preferred redshifts for extremely distant objects. Here the quantisation or periodicity was on a significantly larger scale. Objects were clumping together in preferred redshifts across the whole sky. These redshifts were listed as z = 0.061, 0.30, 0.60, 0.96, 1.41 and 1.96 [G. Burbidge and A. Hewitt, Astrophysical Journal, vol. 359 (1990), L33]. In 1992, Duari et al. examined 2164 objects with redshifts ranging out as far as z = 4.43 in a statistical analysis [Astrophysical Journal, vol. 384 (1992), 35]. Their analysis eliminated some suspected periodicities as not statistically significant. Only two candidates were left, with one being mathematically precise at a confidence interval exceeding 99% in four tests over the entire range. Their derived formula confirmed the redshift peaks of Burbidge and Hewitt as follows: z = 0.29, 0.59, 0.96, 1.42, 1.98. When their Figure 4 is examined, the redshift peaks are seen to have a width of about z = 0.0133.
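As a quick arithmetic check on the peak values as quoted above (this is not a reconstruction of Duari et al.'s actual fitting formula, only an observation about the listed numbers), the peaks turn out to be very nearly evenly spaced in ln(1 + z), the variable in which such periodicities are commonly sought:

```python
import math

# The preferred-redshift peaks listed in the text above
peaks = [0.29, 0.59, 0.96, 1.42, 1.98]

# Spacings between successive peaks in ln(1 + z)
logs = [math.log(1.0 + z) for z in peaks]
spacings = [b - a for a, b in zip(logs, logs[1:])]

# Each spacing comes out close to 0.21, so the listed peaks are
# roughly periodic in ln(1 + z)
print([round(s, 3) for s in spacings])
```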

A straightforward interpretation of this periodicity is that the redshift itself is going in a predictable series of steps and stairs on a large as well as a small scale. This is giving rise to the apparent clumping of objects at preferred redshifts. The reason is that on the flat portions of the steps and stairs pattern, the redshift remains essentially constant over a large distance, so many objects appear to be at the same redshift. By contrast, on the rapidly rising part of the pattern, the redshift changes dramatically over a short distance, and so relatively few objects will be at any given redshift in that portion of the pattern. From the Duari et al. analysis the steps and stairs pattern of the redshift seems to be flat for about z = 0.0133, and then climbs steeply to the next step.
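The clumping argument in this paragraph can be illustrated with a toy model (Python). The step length, riser fraction, and step height below are arbitrary illustrative values, not fitted to any survey; the point is only that objects spread uniformly in distance pile up at the flat-step redshifts:

```python
from collections import Counter

def toy_step_redshift(distance, step_length=1.0, riser_fraction=0.1, dz=0.0133):
    """Toy 'steps and stairs' redshift: flat for most of each step,
    then climbing by dz over a short riser at the end of the step."""
    n, frac = divmod(distance, step_length)
    flat = (1.0 - riser_fraction) * step_length
    if frac <= flat:
        climb = 0.0                                   # on the flat tread
    else:
        climb = (frac - flat) / (riser_fraction * step_length)  # on the riser
    return round(n * dz + climb * dz, 4)

# Objects spread uniformly in distance:
zs = [toy_step_redshift(i * 0.01) for i in range(300)]

# The three most common redshifts are the flat steps:
top = Counter(zs).most_common(3)
print(sorted(z for z, count in top))  # → [0.0, 0.0133, 0.0266]
```

Each flat tread contributes many objects at one redshift value, while the short risers contribute only a few objects at each intermediate value, which is the "apparent clumping at preferred redshifts" the paragraph describes.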

These considerations are important in the current context. As noted above by Riess, the objects at z = 0.95 and 1.2 are systematically faint for their assumed redshift distance. By contrast, the object at z = 1.7 is unusually bright for its assumed redshift distance. Notice that the object at z = 0.95 is at the middle of the flat part of the step according to the redshift analyses, while z = 1.2 is right at the back of the step, just before the steep climb. Consequently, for their redshift values, they will be further away in distance than expected, and will therefore appear fainter. By contrast, the object at z = 1.7 is on the steeply rising part of the pattern. Because the redshift is changing rapidly over a very short distance astronomically speaking, the object will be assumed to be further away than it actually is and will thus appear to be brighter than expected.

These recent results therefore verify the existence of the redshift periodicities noted by Burbidge and Hewitt and statistically confirmed by Duari et al. They also verify the fact that redshift behaviour is not a smooth function, but rather goes in a steps and stairs pattern. If this is accepted, it means that the redshift is not a measure of universal expansion, but must have some other interpretation.

The research that has been conducted on the changing speed of light over the last 10 years has been able to replicate both the basic quantisation picked up by Tifft, and the large-scale periodicities that are in evidence here. In this research, the redshift and light-speed are related effects that mutually derive from changing vacuum conditions. The evidence suggests that the vacuum zero-point energy (ZPE) is increasing as a result of the initial expansion of the cosmos. It has been shown by Puthoff [Physical Review D 35:10 (1987), 3266] that the ZPE is maintaining all atomic structures throughout the universe. Therefore, as the ZPE increases, the energy available to maintain atomic orbits increases. Once a quantum threshold has been reached, every atom in the cosmos will assume a higher energy state for a given orbit, and so the light emitted from those atoms will be bluer than that emitted in the past. Therefore, as we look back to distant galaxies, the light emitted from them will appear redder in quantised steps. At the same time, since the speed of light is dependent upon vacuum conditions, it can be shown that a smoothly increasing ZPE will result in a smoothly decreasing light-speed. Although the changing ZPE can be shown to be the result of the initial expansion of the cosmos, the fact that the quantised effects are not "smeared out" also indicates that the cosmos is now static, just as Narlikar and Arp have demonstrated [Astrophysical Journal vol. 405 (1993), 51]. In view of the dilemma that confronts astronomers with these supernovae, these alternative explanations may be worth serious examination.

 

Reactions

Question:  What is the reaction of the mainstream scientific community now?

Setterfield:  There is still a lot of denial, even though some are considering it more thoughtfully as time goes by.

As far as the effect on cosmology is concerned, I need only point out the response of J. Peebles, a cosmologist from Princeton University. He is quoted as saying “I’m not being dogmatic and saying it cannot happen, but if it does, it’s a real shocker.” M. Disney, a galaxy specialist from the University of Wales, is reported as saying that if the redshift was indeed quantised “It would mean abandoning a great deal of present research.” [Science Frontiers, No. 105, May-June 1996].  For that reason, this topic inevitably generates much heat, but it would be nice if the light that comes out of it could also be appreciated.

 

Comment: Tifft's argument is that galactic redshifts have a superimposed periodicity. If the redshift is caused by a Doppler effect, and if the matter distribution is periodic, then the redshifts will be periodic too, just as Tifft argues that they are (although Tifft's results are not all that strong either). So, even if we accept the "quantization" (poor semantics, "periodicity" is better and more descriptive of what is actually observed), it works just fine in a modified Big Bang cosmology (one has to find a way to construct a periodic mass distribution on large scales, which should not be an onerous task).

Setterfield:  There are in fact periodicities as well as redshift quantisation effects.  The periodicities are genuine galaxy-distribution effects.  However, they all involve high redshift differences, such as repeats at z = 0.0125 and z = 0.0565.  The latter value involves 6,200 quantum jumps of Tifft's basic value and reflects the large-scale structuring of the cosmos at around 850 million light-years.  The smaller value is around 190 million light-years.  This is the approximate distance between super-clusters.

The point is that Tifft's basic quantum states still occur within these large-scale structures and have nothing to do with the size of galaxies or the distances between them.  The lowest observed redshift quantisation that can reasonably be attributed to an average distance between galaxies is the interval of 37.6 km/s that Guthrie and Napier picked up in our local supercluster.  This comprises a block of 13 or 14 quantum jumps and a distance of around 1.85 million light-years.  It serves to show that basic quantum states below the interval of 13 quantum jumps have nothing to do with galaxy size or distribution.  Finally, Tifft has noted that there are redshift quantum jumps within individual galaxies.  This indicates that the effect has nothing to do with clustering.

 

Comment: I was able to get some input on the idea of non-cosmological redshifts from Dr. Virginia Trimble, one who is considered an expert in the field (and even that's an understatement).  Here is the main thrust of what she had to say:

The main goal behind the idea of non-cosmological redshifts was to save steady-state theory (SST).  Quasars are more numerous at high redshift (z), so if z corresponds to age/distance, then the relative populations of quasars are not uniform in time or space, as required by SST.  However, if z does not correlate with time or distance, then SST is not necessarily proven false by an uneven z-distribution of quasars.  However, SST was shown to be false for other reasons, so there is no reason to require that quasar populations are indicators of non-cosmological redshift.

Setterfield:  The quantised redshift that we are talking about is not dealing with quasars.  They are too far out.  In my work I accept the redshift/distance relationship and also the initial expansion of the cosmos. What Virginia Trimble is talking about here is the large-scale periodicities in the quasar distribution.  This is a separate effect from quantisation.

 

Comment continued: Though Dr. Trimble mentioned Arp, she didn't specifically address his contention that objects at different z are connected.  However, she did discuss in much greater detail the alleged quantization of z, and her rebuttal seemed pretty sound to me: First, the methodology of those studying quantized redshift is suspect.  It seems that their hypothesis was tested on the same dataset from which the hypothesis was derived.  When one is doing a statistical analysis, this is an absolute no-no. Similarly, their auxiliary hypotheses of additional quantizations were not well-defined before their analysis--in essence, they drew the target around the arrow after it was shot at the wall, rather than defining a target ahead of time.

Setterfield:  She states that the hypothesis was tested from the same data set from which the hypothesis was derived.  This is not the case.  Data came from  the Coma cluster, the Virgo cluster, and the Local Supercluster.  All were different data sets.  In fact, Guthrie and Napier established quantisation from a different data set altogether, apart from Tifft's.  So did Cocke's analysis.  Therefore the claim about using the same data set is false.  No one drew any target around any landed arrow.  The theory was determined after the data had been collected, and not before.  Please also keep in mind that Guthrie and Napier had set out to DISprove Tifft and ended up agreeing with him.

 

Comment continued: Second, selection effects were not appropriately accounted for by those seeking to prove quantized redshift.  Redshift is determined by observing spectral lines of specific elements in a source, and noting how far they have moved from their laboratory (rest frame) wavelength.  If I remember correctly, the Lyman alpha spectral line moves into the visible spectrum at redshift ~1.8, and since this line is particularly strong, many objects can be seen at this redshift.  Conversely, there are large wavelength ranges with very few spectral lines, so when these regions are shifted into the observation range, no new objects are discovered.  Thus, what looks like a statistical excess (or deficit) is simply a selection effect due to a new spectral line moving into (or out of) the range of observed wavelengths.  Several different known spectral lines crossing into or out of the observation range can account for most (if not all) of the putative quantizations.

Setterfield:  In this paragraph, the statements are only true at high redshifts, around 1.0.  What Trimble is doing here is confusing quantisation and periodicity, as mentioned above.  The periodicity in quasar redshifts, to which she is referring, only shows itself at great distances, whereas the quantisation has been established in objects closer to us.  The two are different.

 

Comment continued: Third, subsequent (and much more extensive) redshift surveys such as CNOC and 2dF have failed to yield statistically significant quantizations.  I know of no physical phenomenon that appears only in the data of those who are trying to prove the reality of that phenomenon, and no others. Frankly, the lack of corroboration by new and better observations by itself convinces me that there is likely nothing to the redshift quantization "controversy".

Setterfield: The new and extensive redshift surveys are of distant objects and redshift figures do not attain the accuracy needed to reveal the quantisation Tifft has picked up.  What is being discussed by Trimble is again large-scale periodicities suggested by Duari et al, and Hewitt and Burbidge, not the quantisation of Tifft, Cocke, Guthrie and Napier.

 

Question:  What about the lack of quantization in recent measurements?  I refer you to Astrophysics, abstract astro-ph/0208117

"We have used the publicly available data from the 2dF Galaxy Redshift Survey and the 2dF QSO Redshift Survey to test the hypothesis that there is a periodicity in the redshift distribution of quasi-stellar objects (QSOs) found projected close to foreground galaxies. These data provide by far the largest and most homogeneous sample for such a study, yielding 1647 QSO-galaxy pairs. There is no evidence for a periodicity at the predicted frequency in log(1+z), or at any other frequency."

Setterfield:  The primary redshift periodicities, or quantizations, which Tifft had picked up were not with quasars, which were too far out to be able to show the small changes involved, but were with objects much closer in.  The work you have referenced here in no way negates Tifft's work.  What it does do is show that a proposal for a periodicity in quasars over large distances and large changes in redshift has been negated.  This is a different story.

 

Question:  There are well-defined relationships in physics between wave speed, frequency, and wavelength, as well as distance, velocity, and time. These are the relationships used by astronomers in defining the redshift. I demonstrate in detail how they inter-relate if the wave speed changes, and at 3.75 million c, the frequency change would correspond to a z of 3,750,000-1 = 3,749,999 (equation 43). This follows straightforwardly from those definitions. To get Setterfield's results, millions of wave-crests must have just vanished on their way to Earth. If so, how?

Setterfield:  In order to overcome this deficiency, the following technical notes give the foundation of that part of the proposition. As will be discovered upon reading this document, no wave-crests will disappear at all on their way to earth, and the energy of any given photon remains unchanged in transit. Not only does this follow from the usual definitions, but it also gives results in accord with observation. This model maintains that wavelengths remain unchanged and frequency alters as light-speed drops. In order to see what is happening, consider a wave train of 100 crests associated with a photon. Let us assume that those 100 crests pass an observer at the point of emission in 1 second. Now assume that the speed of light drops to 1/10th of its initial speed in transit, so that at the point of final observation it takes 10 seconds for all 100 crests to pass. The number of crests has not altered; they are simply travelling more slowly. Since frequency is defined as the number of crests per second, both the frequency and light-speed have dropped to 1/10th of their original value, but there has been no change in the wavelength or the number of crests in the wave-train. The frequency is a direct result of light-speed, and nothing else happens to the wave-train.
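The 100-crest example in this answer reduces to simple bookkeeping. The sketch below just tracks the quantities named in the paragraph (the 10x drop in speed is the paragraph's illustrative figure, not a measured value):

```python
# Wave-train bookkeeping for the 100-crest example above.
crests = 100
wavelength = 1.0               # arbitrary units; held constant in this model
c_emit = 100.0                 # speed at emission: 100 crests pass in 1 second
c_obs = c_emit / 10.0          # after the illustrative 10x drop in transit

freq_emit = c_emit / wavelength    # 100 crests per second at emission
freq_obs = c_obs / wavelength      # 10 crests per second at the observer

time_emit = crests / freq_emit     # 1 second for the train to pass at emission
time_obs = crests / freq_obs       # 10 seconds at the observer

# The crest count and wavelength never change; frequency simply tracks speed.
print(crests, freq_obs / freq_emit, time_obs)  # → 100 0.1 10.0
```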

Second, the standard redshift/distance relationship is not changed in this model. However, the paper demonstrates that there is a direct relationship between redshift and light-speed. Furthermore, astronomical distance and dynamical time are linearly related. As a consequence, the standard redshift/distance relationship is equivalent to the relationship between light-speed and time. The graph is the same; only the axes have been re-scaled.

As far as the redshift, z, is concerned, the most basic definition of all is that z equals the observed change in emitted wavelength divided by the laboratory standard for that wavelength. The model shows that there will be a specified change in emitted wavelength at each quantum jump that results in a progressive, quantised redshift as we look back to progressively more distant objects. This does not change the redshift/distance relationship or the definition of redshift. What this model does do is to predict the size of the redshift quantisation, and links that with a change in light-speed. The maths are all in place for that. (Sept. 21, 2001)

 

Other Explanations?

Question:  Couldn't the supposed quantization of the redshift measurements be an artifact of the clustering of galaxies?  Since we measure the light from the stars, and the stars come in groups, wouldn't this yield a quantized series of measurements? 

Setterfield:  On a large scale, the universe does appear to have structures interspersed with voids. In this case we are dealing with superclusters or megaclusters of galaxies separated by these voids you mention. However, this was not what was measured by Tifft's work. In his initial work he took the Coma cluster of galaxies and noted that redshift "bands", as he called them then, could be traced through the cluster. Later work revealed that redshift quantum jumps occurred within individual galaxies, which shows that this is not just a galaxy distribution effect. Further work by Tifft compared the redshifts of pairs of galaxies within various clusters.  The quantisation was a notable feature of these galaxy pairs, and these results were commented on by New Scientist and other publications.  Thus, the concept of sheets of galaxies or super-clusters with voids in between does not provide an adequate explanation for the quantisation effect, which is apparent on a much smaller scale.

 

Comment and references: As you are interested in alternatives to the Doppler effect, let me point you to an elementary light-matter interaction which redshifts light beams and uses the energy lost by this redshift to heat (amplify) the thermal radiation (2.7 K, or attributed to dust). See:

http://arXiv.org/pdf/astro-ph/0110525
http://arXiv.org/pdf/astro-ph/0203099

Setterfield:  First let me express thanks for bringing this material to our attention.  It is an important matter. 

Second, it is the first time that I have seen a viable explanation for the dramatic redshift differences that would arise if Arp's proposal is correct that quasars have been ejected from nearby galaxies, with the galaxy having a low redshift and its attendant quasar a high one.

Third, it certainly provides another explanation for the effect that is interpreted as a minute change in the fine-structure constant. However, I still have doubts as to the validity of that change in the constant as it is so small and so much teasing out of information from the data is needed to accomplish it. Nevertheless, if it is true, then this does provide a viable explanation for what is observed, provided that it is only with quasars. 

Fourth, it must be emphasised that this effect being described seems to apply strictly to quasars rather than galaxies.  Since galaxies have been found by Hubble with redshifts out to a redshift z of about 5, this does not provide an explanation for the general redshift/distance relationship. Thus, while it supports Arp's contention that galaxies and their presumed ejected quasars are related, it does not explain the progressive galaxy redshifts out to high z values.  This leaves the door open for the standard interpretation of quasars as being the cores of distant galaxies to be correct. 

Fifth, the linkage of this effect with the 2.7 degree microwave background is interesting. On this basis, one would expect an increase of the background temperature with time. Instead, the reverse has been claimed, namely that the further out we look the higher is the background temperature with values measured (from memory) from 6 up to 14 degrees. If these results are verified, then it is an issue that the author of these linked papers may want to address.

 

Comment: I am concerned that the apparent fundamental quantization steps in red shift correspond to the maximum resolution of the equipment used (as far as I can tell).  It appears that NASA has upgraded the Imaging Spectrograph (STIS) in the past year to digital (one CCD and a pair of MAMAs).  Either that or they have just changed some of the details so that they mean less to the reader (me).  In any case, the maximum resolution is still R ~ 100,000.  Therefore the problem may still exist.

Setterfield:  Thanks for the important question.  Tifft answers this in his 1st December 1991 paper in the Astrophysical Journal pp.396-415.  Much of the work has been done on the 21 cm line and gave high levels of accuracy. Indeed, B. M. Lewis in "Observatory" for 1987, pp. 107 and 201, has claimed redshift accuracy in excess of 0.1 km/s. I am familiar with the way Lewis works, and he is a careful researcher. Since the quantum step is about 2.6 km/s, this accuracy is about 25 times greater than the quantity being measured.

 

The Redshift/Lightspeed Connection

Question: Please explain the connection between the redshift and the speed of light.

Setterfield:  The redshift and the speed of light are both 'children' of the same 'parent' -- the Zero Point Energy. The papers that I wrote in 2007 and 2008, Reviewing the Zero Point Energy and Quantized Redshifts and the Zero Point Energy, establish this link.  Since the redshift and lightspeed are thus linked directly through the ZPE, and astronomical distance and time are linked, the graph of redshift against distance is the same as lightspeed against time. Furthermore, because the 1987 Report established that atomic clocks tick at a rate proportional to c, this standard redshift graph can then be used to convert atomic time to actual, or calendar, time. The results are shown in layman's terms in "Time, Life and Man."

 

Question:  When will the next quantized red-shift jump occur? Will there be any evidence of it that would affect everyday life?

Setterfield: The next quantum jump cannot be predicted. I suppose it is possible we might notice something, but that's a bit problematical. I can't think of anything right now. At one time, there was the suggestion that it might cause earthquakes, but that has been negated by more recent research.

 

Question: How would redshift be an automatic product of CDK?  In layman's terms, could you tell me what the prediction from CDK would be regarding the cosmic microwave background radiation and the redshift?

Setterfield:  The redshift and variable light speed are separate manifestations of a basic cosmological effect.  One does not directly cause the other, but both are caused by the increase in the energy density of free space.  As far as light speed is concerned, an energy density increase in the vacuum means that the vacuum effectively becomes a 'thicker' medium for light to travel through, so its speed is retarded.  An increase in the energy density of free space also affects atomic behaviour.  However, atomic behaviour usually occurs in discrete 'jumps.'  It therefore requires a buildup of the energy density of the vacuum to a certain threshold level before there will be a change in atomic phenomena.  Once this threshold has been crossed, more energy is available to the atom, which takes up a higher energy state for its orbits.  These higher energy states emit bluer light.  Therefore, as we look back in time (i.e. distance), we are looking back at times in the cosmos where the energy density was lower, and so atomic orbits had less energy intrinsically and therefore emitted redder light.  Light speed changes smoothly with the vacuum properties, but atomic phenomena change in jumps.  This is discussed in Behavior of the Zero Point Energy and Atomic Constants.  A recent paper also discusses this: Zero Point Energy and the Redshift.

Regarding the microwave background:  A high value for light speed has one effect on the microwave background.  Understand that this background is not itself a manifestation of light speed decay, but is rather related to the energy input at the moment of creation.  A high value for light speed in the earliest moments of the cosmos would mean all radiation would be rapidly homogenized and smoothed out.  In other words, a high value for light speed would get rid of any 'lumps' in the microwave background.  A practical example may help here.  If you have a very large box, a kilometer long, and light speed were one meter per second, it would take one thousand seconds for radiation from one part of the box to get to the other part of the box.  It would be only after a number of internal reflections from within that box that any 'lumps' in the distribution of the radiation would be smoothed out.

This will obviously take a little time.  However, if the speed of light was 10,000 km/second, the smoothing out process would be much more rapid, and this is what a higher light speed has done, to give rise to an essentially smooth microwave background.
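The box illustration above is a single division each time, crossing time = box length / light speed; the two speeds below are the ones quoted in the example:

```python
box_length_m = 1000.0        # a box one kilometre long

slow_c = 1.0                 # hypothetical light speed of one metre per second
fast_c = 10_000 * 1000.0     # hypothetical 10,000 km/s, expressed in m/s

print(box_length_m / slow_c)  # → 1000.0 seconds per crossing
print(box_length_m / fast_c)  # → 0.0001 seconds per crossing
```

With the faster speed, the many internal crossings needed to smooth the radiation take a tiny fraction of the time, which is the point the example is making.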

 

A Lumpy Microwave Background?

Question: We hear the microwave background radiation is lumpy according to latest data.  What about that?

Setterfield:  The first point to note about the Cosmic Microwave Background Radiation (CMBR, or CBR -- cosmic background radiation) is that the distribution is essentially isotropic with very minor variations. The differences that we are looking at are of the order of 1 part in 50,000 to 100,000. This puts everything in perspective. Originally, the diagrams, or the photographs, of the variations in the CBR did not give concordant results.

The first satellite that was used for measuring the CMBR was the COBE satellite. The second satellite that was launched was the WMAP, which has given more accurate results. There has been a suggestion that the gaps in the CMBR that the WMAP has picked up may be identified with some of the existing voids between galaxy clusters, but this has yet to be confirmed. The launch of the European Planck satellite in 2009 promises to give much better data. The primary run of observations by this satellite ends on the sixteenth of March 2012.

However, in the meantime, there has been a very strong criticism of the WMAP data. This was announced June 11, 2010, by Sawangwit and Shanks at Durham University in the UK. Their criticism centered on the algorithm used by the computer which 'smoothed' out the ripples in the data. "They find that the smoothing is much larger than previously believed, suggesting that its measurement of the size of the CMB ripples is not as accurate as was thought. If true, this could mean that the ripples are significantly smaller, which could imply that dark matter and dark energy are not present after all." (Royal Astronomical Society News, June 14, 2010)

In other words, the Cosmic Microwave Background Radiation is probably not as 'lumpy' as first proposed.

 

Question regarding work by the Gentrys

Question:  I have read with interest the following article and wonder if you can comment on it, and how your understanding relates to what is said there. If you have already written on this subject, could you please point me in the right direction to read that, otherwise a brief comment would be appreciated. Thanks.
http://www.halos.com/reports/arxiv-1998-rosetta.htm[See excerpt below]

The Genuine Cosmic Rosetta
Robert V. Gentry and David W. Gentry

Abstract

Re-examination of general relativistic experimental results shows the universe is governed by Einstein’s static-space-time general relativity instead of Friedmann-Lemaitre expanding-space-time general relativity. The absence of expansion redshifts in a static-space-time universe suggests a reevaluation of the present cosmology is needed. 

For many decades the Friedmann-Lemaitre space-time expansion redshift hypothesis has been accepted as the Rosetta of modern cosmology. It is believed to unlock the mysteries of the cosmos just as the archaeological Rosetta unlocked the mysteries of ancient Egypt. But are expansion redshifts The Genuine Cosmic Rosetta?

Until now this has been the consensus because of their apparent, most impressive ability to uniquely explain how the twentieth century’s two great astronomical and astrophysical discoveries—meaning of course the Hubble redshift relation and the 2.7K Cosmic Blackbody Radiation (CBR)—can be accounted for within the framework of a hot big bang universe. But this consensus is not universal. For example, Burbidge and Arp continue to note that while most astronomers and astrophysicists accept the hot big bang and attribute extragalactic redshifts to expansion effects, they continue to ignore the minority view that certain observations, such as anomalous quasar redshifts, imply the need for a different redshift interpretation, and perhaps a different universe model as well.

Setterfield:  First of all, their comments about the significance of the redshift in relation to the Big Bang are certainly justified.  They go to some trouble to point out that if the Big Bang fails on this point, one of its key bases would be destroyed.  They point out that the evidence is that the universe – the fabric of space itself – is not expanding.  I agree with this.  One of the ways in which they do this is to point out the distinction between Einstein’s and Friedmann’s mathematical descriptions of the universe.  These descriptions differ in that Einstein’s equations have the fabric of space as static, whereas Friedmann’s equations have it expanding.  What the Gentrys have done is to show that the observational evidence supports the Einstein formulation rather than that of Friedmann.  If only Einstein's formulation is valid, then the redshift is not due to the fabric of space itself expanding and stretching light waves in transit.  This leaves only the motion of the galaxies themselves through the static fabric of space to account for the redshift.

The Gentrys, however, consider that the bulk of the redshift is due to a gravitational effect because the earth is near the center of the universe as they see it.  (They also consider that there is a component of the redshift that is due to the motion of galaxies.)  The problem that I have with this is that Misner, Thorne, and Wheeler in their massive book entitled Gravitation point out that gravitational redshifts greater than a redshift of 0.5 will result in rapid gravitational collapse.  It appears that the Gentrys have not yet addressed this issue.   

I want to emphasize that I totally agree with the Gentrys that an alternative explanation to the currently accepted Big Bang explanations is badly needed.

 

An Accelerating Expansion of the Universe?

Question: This past weekend the "Toronto Star" had an article in its science section which states that recent evidence (a la big bang) suggests that the rate of universal expansion is accelerating...and that science has no idea why.

Wondered if you were aware of this and if this might not be evidence of the slowing of the speed of light...an acceleration of redshifts (?) in a very large lab (universal scale).

Setterfield:  Basically, what has happened is a breakdown in the redshift/distance relationship. That relationship is usually given as the relativistic Doppler equation. I am old enough to remember when the straight-forward Doppler equation was in use. This worked fine until it was found that redshifts were appearing that were greater than z = 1. These results implied that the outer parts of the universe were expanding faster than light. In order to overcome that embarrassment, the relativistic Doppler equation was introduced and has been reasonably successful up to now. However, as a redshift of z = 1 is approached, a discrepancy is noted. This has been picked up relatively recently as a result of the Hubble Space Telescope observing distant supernovae of Type Ia which have a specific brightness that then allows an accurate distance estimate to be obtained. These observations indicated that these supernovae were fainter than expected for their redshift and so were further away than the formulae suggested. This could only happen if the cosmos was expanding increasingly faster as time went on. The equations describing the Big Bang scenario have a handful of different parameters which have been added or adjusted as different discoveries have been made. As a result, the model is beginning to look increasingly unwieldy.   

However, that is not all. The situation described above exists out to a redshift of z = 1.5 to 1.7. After that, the supernovae appear to be brighter than expected. This suggests that they are closer in than expected from the formulae. The majority of astronomers can only conclude that from the moment of the Big Bang out to a redshift of about 1.7 the universe was decelerating, and that the acceleration began after this. The action of the cosmological constant first postulated by Einstein has been invoked to account for these discrepant data.

Basically, what these results mean is that as we go out into space, there is a change in the redshift distance relationship from the expected formula. The true formula is such that it climbs slightly more slowly than the accepted formula with increasing distance out to about a redshift of z = 1. After that point, the actual curve starts climbing more steeply than the accepted formula. In other words, the relativistic Doppler equation is not the correct formula to describe the behaviour of the cosmos. This carries with it the implication that the idea of the redshift being due to expansion may not be correct either. This implication is currently being avoided by invoking the action of dark energy through the action of the cosmological constant.

Well, what is the correct formula to describe the behaviour of the cosmos? Incredibly, it appears that a mathematical approach that describes the origin of the Zero-Point Energy (ZPE) using Planck particle pairs (PPP) can reproduce the features required. Standard mathematical formulae describing the processes operating with PPP at the inception of the cosmos are used. This is work in progress at this moment, and we won’t have the full details until the end of 2003 or early 2004. But, in summary, there is a formula that is very similar to the relativistic Doppler formula that holds the potential to describe the behaviour of the redshift accurately, and that also describes the behaviour of the speed of light, Planck’s constant, the ticking of atomic clocks and atomic masses on the basis of the origin of the Zero Point Energy. Information on this is in several of my papers, including

The Redshift and the Zero Point Energy

Quantized Redshifts and the Zero Point Energy

Zero Point Energy and the Redshift      

I trust that this gives you an overall idea of what is happening.

 

The Redshift, ZPE, and Relativity

Questions:  1. What is the significance of the work of Dayton Miller in respect to the existence of the ether and how does it relate to the ZPF?  

2. In your document 'The Redshift and the Zero Point Energy' you write (bottom p 2 / top p 3)

"However, around a redshift of about 0.4, and post-1960, a departure from linearity began to be noted as galaxy ‘velocities’ became more relativistic. Consequently, by the mid-1960’s, the relativistic Doppler formula was applied and, even with the advent of the Hubble Space Telescope, it was found to be a reasonably accurate approximation for objects even at the frontiers of the universe. Thus equation (3) came to be re-written as z = {[1+(v/c)] / √[1-(v²/c²)]} – 1 (5)".

On the internet I found no explanation for your remark here. Being not a specialist, I'm a bit puzzled. What does it mean to say "became more relativistic". Redshift had been connected to velocity by multiplying redshift with speed of light. Then Hubble stated that there is a linear relation between velocity and distance, known as 'Hubble's law'. In my understanding, age was also connected to velocity. As I understand from your document, there is a way of measuring velocity independent from the redshift, with which you can then correct Hubble's law, giving rise to the "relativistic Doppler formula". Can you solve my problem? 

Setterfield:  You ask about the significance of the Dayton Miller experiment, the ether drift he recorded, and how it relates to the ZPE. It is true that Miller, as well as Michelson and Morley, produced relatively small but nonetheless positive results with their interferometers. Miller pointed out that the M-M experiment was in a heavily shielded environment. When Miller successively removed all shielding on his apparatus and transported it to a higher altitude, he obtained larger readings of the ether drift. On this basis, Miller concluded that the ether was entrained by the earth and so yielded lower results for any ‘drift’ the closer to the earth’s surface the experiment was conducted. Likewise, he concluded a similar effect for any shielding of the equipment. His final conclusion was that there was a motion of the solar system towards the southern hemisphere constellation of Dorado. In actual fact, the motion of the solar system has been accurately measured as being in a different direction and confirmed by studies using the microwave background radiation, which is also all-pervasive and provides an absolute reference frame. These data all indicate that we are moving at about 600 km/s towards the centre of the Virgo cluster of galaxies. Note that the very fact that the microwave background provides us with an absolute reference frame negates some concepts basic to relativity. For a discussion of this latter point, see Martin Harwit, ‘Astrophysical Concepts’, second edition, p.178, Springer-Verlag, New York, 1988.

One key point that you raised indirectly needs to be mentioned here. The presence of the ZPE is like an all-pervasive sea. Contrary to what Miller suggested for his ether, the ZPE is not inhibited in any way by the presence of matter. Irrespective of the density of matter, all the particles making up individual atoms are immersed in this ZPE ‘sea’. Instead of inhibiting its action, atoms are themselves sustained by the ZPE rather like pieces of debris carried along by the ocean.  

Now for your second question. Up until 1960 or thereabouts, the redshift appeared to be a linear relationship, that is to say redshift z = v/c where v was the so-called recession velocity and c the speed of light. This is what we have in equation (3) in the paper “The Redshift and the Zero Point Energy”. One point which was not made clear in the paper, but which should clarify things for you, is that up to 1960, we could not measure the spectral shifts of very distant galaxies because of the limitations of our equipment. As far out as we could measure, the redshift/distance graph was a straight line. On this basis it was expected that there would be no redshifts greater than z = 1, where the recessional velocity v = c. Sometime in the early 1960’s, as equipment improved and we could see and measure spectral shifts to greater distances, it was noted that the data were deviating from a straight line by successively greater amounts at successively greater distances. The crunch point came when redshifts of z > 1 were measured. It was around that point in time, in the early 1960’s, that the straightforward linear Doppler shift z = v/c that had been in use was discarded and the relativistic Doppler formula applied, namely z = [1 + (v/c)] / √[1 - (v²/c²)] – 1, as in equation (5).
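The difference between the two formulas can be seen with a short numerical sketch (an illustration of the equations quoted above, not from the original text). The linear form forces z = 1 at v = c, while the relativistic form of equation (5) already gives z = 2 at only v = 0.8c, so measured redshifts greater than 1 no longer imply faster-than-light recession.

```python
import math

def z_linear(beta):
    # Linear Doppler shift used before the early 1960's: z = v/c
    return beta

def z_relativistic(beta):
    # Relativistic Doppler formula, equation (5) in the text:
    # z = [1 + (v/c)] / sqrt[1 - (v^2/c^2)] - 1
    return (1 + beta) / math.sqrt(1 - beta**2) - 1

# Compare the two formulas for several recession speeds (beta = v/c):
for beta in (0.1, 0.5, 0.8, 0.9):
    print(f"v/c = {beta}: linear z = {z_linear(beta):.3f}, "
          f"relativistic z = {z_relativistic(beta):.3f}")
```

At small v/c the two formulas nearly agree, which is why the linear form worked well for nearby galaxies; they only diverge noticeably as v approaches c.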

 

Redshift and Color

Question:  The question is: when we speak about 'redshift', what do we mean: is the whole spectrum of distant objects shifted to the red end, so that those objects appear to have a 'redder' color when we look at them, or: do we only talk about the shifting of the absorption lines to the red end of the spectrum, while the spectrum itself has the same colored outlook as near objects? I am confused about this, because the models I have seen all support the first option, while Barry also seems to speak about the second option, because in his opinion wave lengths do not change from the moment they were emitted to the moment they hit something (our eye, for instance). And wave length is what we perceive as color. So to me the consequence of Barry's viewpoint is, that indeed very distant objects are 'redder'. Can you help me out of this dilemma?

Setterfield:  The redshift of light from distant galaxies does indeed mean that the whole spectrum of colours is shifted towards the red. In other words, all the light emitted is proportionally redder the further away its source is. The objects look physically redder. Astronomy often attributes this to a Doppler effect of galaxies moving away from us. All wavelengths are affected proportionally. The spectral lines of the elements are therefore also affected proportionally.

On the model with changing lightspeed, these redder wavelengths are due to a lower strength of the Zero Point Energy (ZPE) which supports atomic structures across the cosmos. When the ZPE strength was lower, all atomic orbits had energies which were proportionally lower. Lower orbit energy for atoms means redder light emitted from those atoms. Because lightspeed is also affected by the ZPE, the higher the redshift the higher the value of lightspeed in direct proportion. 

Now comes something important. Up until now we have been talking about wavelengths of emitted light. As that light goes in transit across the universe and the speed of light drops, that emitted wavelength remains unchanged. In other words, when lightspeed changes simultaneously across the whole cosmos, it is not wavelength that changes, but frequency. The original wavelength at emission is locked in. This has recently been proven correct by Keith Wanser, a professor of physics, who was examining these ideas rather closely. He discovered that Maxwell’s equations predict that it will be the frequency of light that changes in any scenario with cosmological changes in lightspeed. Thus the wavelengths remain intact in transit. (In private correspondence, Keith Wanser has mentioned he hopes to publish this research shortly.) There was also experimental proof of this in the 1920’s to 1940’s when changes in lightspeed were being measured, but there were no measured changes in wavelengths of light in transit. This leads to the conclusion that it is frequency that varies as lightspeed drops in transit. Since frequency is simply the number of waves passing per second, as light of a given wavelength slows down, there are fewer waves of the same wavelength passing a given point per second. This is entirely logical.  
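The bookkeeping in the paragraph above (wavelength locked in at emission, frequency changing with lightspeed via f = c/λ) can be sketched numerically. The doubled lightspeed value below is purely illustrative, chosen only to make the proportionality visible; it is not a measured figure.

```python
# Hedged sketch of the claim above: for light already in transit, the
# wavelength is locked in at emission, so a drop in lightspeed shows up
# as a drop in frequency (f = c / wavelength), not a wavelength change.

C_NOW = 299_792_458.0       # present speed of light, m/s
C_EARLIER = 2 * C_NOW       # assumed higher earlier value (illustrative only)

wavelength = 656.3e-9       # hydrogen-alpha line in metres, fixed at emission

f_earlier = C_EARLIER / wavelength   # frequency while c was higher
f_now = C_NOW / wavelength           # frequency after c has dropped

print(f"wavelength (unchanged): {wavelength * 1e9:.1f} nm")
print(f"frequency before the drop: {f_earlier:.3e} Hz")
print(f"frequency after the drop:  {f_now:.3e} Hz")

# Halving c halves the frequency while the wavelength stays the same:
assert f_now == f_earlier / 2
```

Fewer waves of the same length pass a given point per second when the light slows, exactly as the text describes.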

It is important to realize that the redshift in the wavelengths of light is the result of the lower ZPE on the atoms themselves. As the ZPE increases, the emitted light becomes more energetic because the energy of the atomic orbits is greater with a stronger ZPE. Since the blue end of the spectrum is the more energetic end, the light from these atoms will be bluer. Because atomic processes are quantized, or go in jumps, the light emitted from those atoms will become bluer in jumps as the strength of the ZPE increases. Thus, as we look back in time (which means when we look out into the depths of space) the light coming to us is redder (in jumps). Since our own local atoms are emitting light at wavelengths corresponding to the most recent (highest) value for the ZPE strength, our light will always be bluer than from distant objects.

 

The Lightspeed Curve and the Oscillation

Question:  In http://www.setterfield.org/AstronomicalDiscussion.htm#missingmass you wrote: 

"The recent work undergoing review indicates a very different curve which includes a slight oscillation. This oscillation means that there is very little variability in light speed out to the limits of our galaxy."

In the last sentence you indicate that the light speed has been about constant out to the limits of our galaxy. Does this mean that the light speed has been constant from when the light left the limits until it has reached us? Does this mean that your revised data and formulas do not support a young world interpretation?

Setterfield:  The point I was trying to make was that light got to us from the furthest reaches of our galaxy very quickly at the beginning.  For example, on the first day of creation, light from the center of our galaxy reached the earth in under three seconds.  Therefore light from the furthest reaches of our galaxy could have reached the earth in less than ten seconds.  This situation would have deteriorated rapidly as the buildup of the Zero Point Energy rapidly increased.  This occurred as the potential energy invested in the newly stretched universe was very quickly converted to kinetic energy, much like what we see when we stretch a rubber band and then release it.  Thus the speed of light would have rapidly dropped at first, as the build-up of the ZPE would have also caused the increase of virtual particles in any given volume of space at any given time.  This is what causes light to take longer to reach its destination.  With few or no virtual particles in the beginning, light would have reached its destination extraordinarily fast.
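As a back-of-envelope check of the figures in this answer, the multiple of today's lightspeed implied by the "under three seconds" claim can be computed directly. The ~26,000 light-year distance to the galactic centre is my assumed round number, not taken from the text.

```python
# Hedged sketch: if light from the centre of our galaxy (assumed here to
# be roughly 26,000 light-years away) reached earth in about 3 seconds,
# how much faster than today's lightspeed must it have travelled?

SECONDS_PER_YEAR = 3.156e7   # approximate seconds in one year

distance_light_years = 26_000   # assumed distance to the galactic centre
travel_time_seconds = 3

# A light-year is the distance light covers in one year at today's speed,
# so the required speed, as a multiple of today's speed, is:
ratio = distance_light_years * SECONDS_PER_YEAR / travel_time_seconds
print(f"required multiple of today's lightspeed: about {ratio:.1e}")
```

Under these assumptions the answer comes out on the order of 10^11 times the present speed of light.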

This is how we have been able to see light from the most distant parts of the cosmos in less than ten thousand years.  The beginning part of its journey was much more rapid than the final part!  The actual curve shown by the dropping speed of light can be seen here.

This curve is the same as the redshift curve.  However the redshift curve starts some way out from our galaxy.  The actual light speed curve that pertained from the earth out to the edge of our galaxy is still being determined.  It has become apparent that an oscillation is involved which has become noticeable over this distance.  Part of that oscillation took the speed of light marginally lower than its present value from about 1500 B.C. to about 400 A.D.  As we go back in time, exactly at what point the speed of light started climbing from the oscillation trough I am not sure.  It has yet to be fully determined and I am working on this area at the moment.   

It was for this reason that I said in the quoted sentence “this oscillation means there is very little variability in light speed out to the limits of our galaxy.”  Keep in mind that our galaxy is an extremely tiny part of the entire universe and it is beyond our galaxy that the light speed data becomes much more indicative of change.  In the same way that we do not see a major redshift change until we are outside of our galaxy, we also will probably not see much more than the oscillatory change in the speed of light until we are outside of our galaxy.  There may be more evidence of change as we reach the outer edges of our galaxy.  I am still researching that. In any case, the comment about little variability must be seen in the total context where the initial value of c was about 10^11 times its current speed. In this context, any graph displaying the behaviour of light over the lifetime of the cosmos will show very little change at this end of the curve. The redshift graph is almost a horizontal line at this point, and the lightspeed graph follows this, but with a conversion factor included.

 

Wikipedia and the Red Shift

Question:  I am a young-earth creationist. I have read with much fascination over the years your articles on variable c and more recently redshift quantization. 

According to this Wikipedia article,  the most recent sky surveys show no quantization.   Since the most recent article I have read from you regarding this issue is from 2002, I was wondering if in fact the redshift quantization has been refuted as the Wikipedia article claims. 

Since Wikipedia is, more often than not, a pooled-ignorance session that is unreliable when dealing with controversial subjects, I was hoping to get the straight story from you. If you have any more recent articles you could refer me to about this I would appreciate it.

Setterfield:  Thanks for the note with its question and the URL to Wikipedia. It is appreciated.

In response I would direct you to The ZPE and the Redshift section of my paper here on this website, Behavior of the Zero Point Energy and Atomic Constants.

I would ask you to note in particular that the Wikipedia article specifically states that it is the redshifts of QUASARS that are not quantized. The majority of the most recent work is dealing with objects (quasars) deep in space towards the frontiers of the cosmos at high redshifts. What was noticed initially with quasars was a large scale clustering effect - large numbers of quasars at preferred redshifts. However, this is different from what Tifft was noting within a given cluster of galaxies. There were consistent redshift jumps between individual members of the cluster, not preferred numbers of galaxies at a given redshift. This redshift jump cut right through individual galaxies and has nothing to do with the clustering which was initially picked up with quasars. Indeed, the redshift quantum jump is small. This means it will be more difficult to detect at high redshifts because our instrumentation is less sensitive to small redshift changes at those distances.

So in summary, the clustering effects which were initially observed with quasars, and which may have been negated by recent work, are NOT the same as the Tifft redshift quantum jumps which are apparent between individual members of a cluster or group of galaxies. This issue may have been blurred by Arp's contention that quasars are ejected from nearby galaxies, so his model DOES expect the quasar redshifts to show these preferred values. THAT may well have been disproved, and, if it has, it also disproves Arp's contention that quasars are nearby phenomena. If, however, the quasars really are distant, as the ZPE model accepts, then the quantization of redshifts between galaxies in groups still stands: it is difficult to detect such small changes at high redshifts, and, indeed, this specific effect has NOT been looked for between neighboring quasars.
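The distinction drawn above, quantized velocity differences between members of one group versus many objects piling up at a few preferred redshifts, can be illustrated with synthetic numbers. The 72 km/s step below is only a placeholder for whatever quantum is claimed; neither it nor the member velocities are taken from this text.

```python
# Illustrative sketch of Tifft-style quantization between cluster members.
# All numbers here are made up for illustration; the 72 km/s step is a
# placeholder value, not a figure quoted in the surrounding text.

step = 72.0  # km/s, hypothetical quantization step

# Hypothetical cluster members whose redshift "velocities" differ by
# whole multiples of the fixed step:
member_velocities = [9000.0 + n * step for n in (0, 1, 2, 4, 5)]

# Pairwise velocity differences within the cluster:
diffs = [b - a for i, a in enumerate(member_velocities)
         for b in member_velocities[i + 1:]]

# Every difference is a whole multiple of the step - the signature of
# quantization between members, as opposed to a clustering effect where
# many separate objects simply share a few preferred redshifts.
multiples = [round(d / step) for d in diffs]
assert all(abs(d - m * step) < 1e-9 for d, m in zip(diffs, multiples))

print("pairwise differences (km/s):", diffs)
print("as multiples of the step:   ", multiples)
```

A survey that histograms quasar redshifts tests the first effect; only pairwise differences within a single group test the second, which is why negating one does not negate the other.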