The most famous sutra (teaching) in Mahayana Buddhism is known as the Heart Sutra, which contains - according to one commonly used translation - the following lines:
"Form is not other than Void, Void is not other than Form".
I will here briefly attempt to explain the deep meaning of these lines and their potentially great significance in mathematical terms.
Form relates to material phenomena, which in dualistic terms are largely understood as possessing a distinct independent existence.
Indeed this is what gives phenomena a customary rigid identity in analytic terms.
However the corresponding holistic perspective is to view phenomena not in terms of their distinct quantifiable identity, but rather with respect to the qualitative interdependence which ultimately connects all phenomena.
And such holistic appreciation relates directly to intuitive realisation that is the hallmark of advanced contemplative type awareness.
The culmination of such holistic awareness then leads to the realisation of the unity of all form (with respect to a common underlying spiritual nature).
However this ultimate appreciation of the qualitative interdependence of all reality requires corresponding detachment from the recognition of phenomena of form with respect to their separate phenomenal identity.
Thus the unity of all form coincides with the emptiness (i.e. nothingness) of such form (in a separate phenomenal manner).
Of course in experiential terms this can only be approximated in a dynamic relative fashion.
Thus as the underlying spiritual unity of all creation becomes more evident, (distinct) phenomena of form become ever more transient as they arise and pass away from attention with increasing alacrity.
Eventually, the temporary dynamic nature of distinct phenomena will not even appear to be present in consciousness (though indirectly they must still be generated).
So at this stage, the unity of all form (as the actual realisation of the underlying spiritual nature of all phenomena) will approximate ever more closely to the (empty) void, as the pure potential basis for the subsequent emergence (in actual terms) of all such phenomena.
Now this has a direct relevance for mathematical appreciation.
The two most fundamental numbers are 0 and 1.
From the customary analytic perspective, these two digits are given an absolutely separate independent identity.
Their great significance is demonstrated by the binary digital system on which the present IT revolution is based. So all information can be potentially encoded through the analytic use of the two digits 1 and 0!
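As a trivial Python illustration of this encoding (using, for example, UTF-8 byte values):

```python
# The word "Form" rendered purely as binary digits (its UTF-8 byte values).
print(" ".join(format(b, "08b") for b in "Form".encode("utf-8")))
# 01000110 01101111 01110010 01101101
```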
However, what is not at all clearly recognised is that all mathematical symbols and relationships, with a customary analytic interpretation (in quantitative terms), can equally be given a holistic interpretation (with immense potential implications from a qualitative perspective).
Therefore 1 and 0 have an important holistic meaning, which complements their accepted analytic interpretation.
And just as 1 and 0 are considered to be absolutely separate in analytic terms, they are considered as fully relative - and ultimately identical - with each other from the corresponding holistic perspective.
So, in holistic terms, 1 and 0 are seen - as it were - as two sides of the same coin, which mutually imply each other.
Thus 1 (as the qualitative unity of all relatively distinct phenomena) implies 0 (as the corresponding nothingness with respect to a separate identity in quantitative terms) and vice versa.
Therefore in holistic mathematical terms, the lines quoted above from the Buddhist Heart Sutra could be simply represented as:
1 (i.e. oneness) is (ultimately) indistinguishable from 0 (i.e. nothingness), and 0 is (ultimately) indistinguishable from 1.
However the clear implication of such understanding is that we have to let go of the absolute identity of mathematical symbols in both analytic and holistic terms.
Thus both the analytic (Type 1) and holistic (Type 2) aspects of the number system can only be rightfully understood in a dynamic interactive manner. Thus notions of both quantitative independence and qualitative interdependence respectively, are now understood as complementary notions with a relative - rather than absolute - meaning.
Now we will briefly see how these analytic and holistic interpretations directly apply to the simplest of numbers.
For example when we understand "3" in the customary analytic manner, it can be defined as
3 = 1 + 1 + 1.
So the individual units are understood here in an independent, homogeneous quantitative manner (literally lacking any qualitative distinction). So we have no way of distinguishing the separate units from each other (which would require some unique qualitative feature).
In more complete terms, we can express this quantitative notion of "3" in Type 1 terms as 3^1.
Alternatively this can be expressed - using units only - as (1 + 1 + 1)^1.
However, when we understand "3" in the unrecognised holistic manner, interpretation is subtly inverted.
So here "3" represents - not individual separate units of quantity - but rather the interdependence of all units in a collective manner.
Then in direct terms, just as analytic appreciation occurs in a rational manner, holistic appreciation occurs in a complementary intuitive manner!
This latter aspect of number is more fully expressed in Type 2 terms as 1^3.
Alternatively this can be expressed as 1^(1 + 1 + 1).
So now both the Type 1 and Type 2 aspects have been expressed with reference to the number "1".
However when we appreciate these two aspects appropriately in a dynamic interactive manner (i.e. in Type 3 terms) it becomes apparent, like the turns at a crossroads, that the use here of 1 is inherently paradoxical.
So once again using our crossroads example: in heading up a road (in a N direction), a left turn at the crossroads can be unambiguously identified.
Likewise in a reverse manner, in heading down the road (in the opposite S direction), a left turn at the crossroads can again be unambiguously identified.
However when, in a dynamic interactive manner, we attempt to embrace the approach to the crossroads simultaneously "seeing" from both N and S directions, then the identification of a left turn is rendered paradoxical. For what is left from one direction (say heading N) is right from the opposite direction (heading S) and vice versa.
It is quite similar in number terms. What is identified as 1 (from the Type 1 perspective) is in fact 0 (from the corresponding Type 2 perspective). Likewise what is identified as 0 (from the Type 1 perspective) is 1 (from the complementary Type 2 perspective).
Let us look at our example more closely to identify why this in fact is so.
Now again with respect to 3^1, I refer to 3 as the base and 1 as the corresponding dimensional number respectively.
Then, when we identify the base number 3 = (1 + 1 + 1) in quantitative terms, the corresponding dimensional number 1 (in Type 3 terms) should correctly be interpreted in a complementary qualitative manner.
In other words, whereas the base number 3 (= 1 + 1 + 1) refers to an actual number (in quantitative terms), the corresponding dimensional number 1 refers - in this relative context - to the potential for number existence (in a qualitative manner).
So whereas the actual number is finite (in quantitative terms), the number dimension is strictly speaking infinite in nature (potentially applying to any number).
Therefore the number dimension - having a qualitative meaning that provides the basis for subsequent relationships between numbers - is nothing (i.e. 0) in an actual quantitative manner.
So 1 as used in a qualitative context is strictly 0 (in corresponding quantitative terms).
Likewise with respect to 1^3, the meaning of 1 is subtly inverted, as now implying the base unit for all subsequent qualitative relationships (where interdependence is achieved).
Then in relative terms 3 (= 1 + 1 + 1) now carries a numerical significance in dimensional terms (as 3 related dimensions).
However what has a finite numerical meaning in a qualitative manner, strictly has no meaning in quantitative terms.
So again 1 (where 1 now numerically refers to qualitative identity) is strictly 0 in a corresponding quantitative dimensional manner.
In fact, we can indirectly show how this is so!
With respect to 1^3, we can indirectly express in a quantitative manner the circular nature of interdependence that attaches here to the 3 dimensional units by obtaining the 3 roots of 1.
So the 3 roots of 1 are 1, – .5 + .866i and – .5 – .866i respectively, which geometrically can be expressed as 3 equidistant points on the unit circle in the complex plane.
And the collective sum of these roots = 0. Holistically this can be explained by the fact that these express (in an indirect quantitative manner) the qualitative notions of 1st, 2nd and 3rd (in the context of a group of 3 members).
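This can be checked directly with a small Python sketch (using the standard cmath module):

```python
# Compute the 3 roots of 1 as equidistant points on the unit circle and
# confirm that their collective sum is (approximately) zero.
import cmath

n = 3
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(1, n + 1)]
for r in roots:
    print(f"{r.real:+.3f} {r.imag:+.3f}i")   # -0.5 + 0.866i, -0.5 - 0.866i and 1 (up to rounding)
print(abs(sum(roots)))                       # ~0 (up to floating-point rounding)
```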
Thus whereas cardinal identity relates to the quantitative nature of number (made up of independent individual units), in this context, ordinal identity relates by contrast to the corresponding qualitative nature of number (whereby the collective interdependent identity of unique units is expressed).
So if we are to properly understand the nature of number in dynamic interactive terms, we must recognise the complementary nature of both analytic and holistic aspects (where number is defined - relatively - in both linear and circular terms).
The implication of this is that the very nature of 1 and 0 now must likewise seamlessly switch as between each other. So what is 1 in a quantitative context is 0 in a corresponding qualitative manner; likewise what is 1 in a qualitative context is 0 in a corresponding quantitative manner.
Therefore the ultimate nature of number approaches a state of pure ineffable mystery, where both linear and circular frames of reference are united in the pure marriage of both the quantitative and qualitative interpretation of mathematical symbols.
And here we have the holistic identity of 1 and 0 that ceaselessly change between each other.
Thursday, September 1, 2016
Further Investigation
In the last blog entry I suggested that a simple relationship governs the ratio of irreducible to reducible fractions (relating to the fractions in the Type 2 aspect of the number system that express the n roots of 1).
So again for example if n = 9, the n roots can be expressed in Type 2 terms as 1^(1/9), 1^(2/9), 1^(3/9), 1^(4/9), 1^(5/9), 1^(6/9), 1^(7/9), 1^(8/9) and 1^(9/9).
So the nine fractions in question here are 1/9, 2/9, 3/9, 4/9, 5/9, 6/9, 7/9, 8/9 and 9/9.
Of these 1/9, 2/9, 4/9, 5/9, 7/9 and 8/9 are irreducible (as numerator and denominator have no common factors).
By contrast however 3/9, 6/9 and 9/9 are reducible!
So the hypothesis I offered was that the average proportion of irreducible fractions with respect to the number system as a whole → 1/(1 + 2/π) = π/(π + 2).
Therefore the average proportion of reducible fractions for the number system as a whole → 1/(1 + π/2) = 2/(π + 2).
This would entail that on average about 61.1% of fractions would be irreducible and 38.9% reducible.
Expressed even more simply, the average ratio of irreducible to reducible fractions → π/2, or alternatively the average ratio of reducible to irreducible fractions → 2/π.
I then went on to consider the proportion of irreducible fractions that would apply to the roots of those numbers with non-repeating and repeating prime structures respectively.
So excluding 1, the numbers between 2 and 10 with non-repeating prime structures are 2, 3, 5, 6, 7 and 10, whereas 4, 8 and 9 have repeating prime structures (relating to their constituent prime factors).
And again for the number system as a whole, the average proportion of numbers with non-repeating prime structures → 1/(1 + 2/π) = π/(π + 2). And the corresponding proportion of numbers with repeating prime structures → 1/(1 + π/2) = 2/(π + 2).
Now one would expect that a higher proportion of irreducible fractions would apply with respect to those numbers with non-repeating prime structures.
From my preliminary estimates it seems that another simple pattern comes into focus.
It would appear that with respect to the numbers with non-repeating prime structures, the average proportion of irreducible fractions → (π – 1)/π = .68169....
This therefore would imply that the average proportion of irreducible fractions for numbers with repeating prime structures → .5 (this is consistent with the overall proportion, since π/(π + 2) * (π – 1)/π + 2/(π + 2) * .5 = (π – 1)/(π + 2) + 1/(π + 2) = π/(π + 2)).
Expressed another way, this would thereby imply that for numbers with repeating prime structures the average proportions of reducible and irreducible fractions would approach equality, or alternatively that the ratio of reducible to irreducible fractions (for numbers with repeating prime structures) → 1.
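This hypothesis can be probed empirically. The rough Python sketch below (a check only; the limit of 1,000 and the weighting of the averages are arbitrary choices) classifies each number by whether its prime factorisation contains a repeated prime, counts the irreducible fractions k/n (those with gcd(k, n) = 1) for each n, and compares the two class averages with (π – 1)/π and .5 respectively.

```python
# Rough empirical check: proportion of irreducible fractions k/n, split by
# whether n has a repeating prime structure (i.e. a squared prime factor).
from math import gcd, pi

def has_repeated_prime(n):
    """True if some prime divides n more than once."""
    d, m = 2, n
    while d * d <= m:
        if m % d == 0:
            if m % (d * d) == 0:
                return True
            m //= d
        else:
            d += 1
    return False

LIMIT = 1000
counts = {False: [0, 0], True: [0, 0]}   # class -> [irreducible fractions, all fractions]
for n in range(2, LIMIT + 1):
    rep = has_repeated_prime(n)
    counts[rep][0] += sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
    counts[rep][1] += n

print("non-repeating:", counts[False][0] / counts[False][1], "vs (pi - 1)/pi =", (pi - 1) / pi)
print("repeating:    ", counts[True][0] / counts[True][1], "vs .5")
```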
Monday, August 29, 2016
Another Interesting Relationship
In an earlier blog entry, "Remarkable Features of the Number System 1", I drew attention to a very simple relationship governing the ratio of numbers with non-repeating to numbers with repeating prime structures respectively.

Once again, when each prime occurs but once in the unique factor composition of a number, then it is termed a number with a non-repeating prime structure.

So for example 30 = 2 * 3 * 5 represents a number with a non-repeating prime structure (as each factor occurs but once).

However when one or more primes is repeated in the unique factor composition of that number, then it is a number with a repeating prime structure.

So in this context, for example 28 = 2 * 2 * 7 represents a number with a repeating prime structure (as 2 in this case occurs twice).
Basically, I concluded following a fairly extensive range of empirical testing, that for the number system as a whole, the average frequency of numbers with non-repeating prime structures → 1/(1 + 2/π) = π/(π + 2) and that the corresponding average frequency of numbers with repeating prime structures → 1/(1 + π/2) = 2/(π + 2).

Therefore the ratio of numbers with non-repeating to repeating prime structures (for the number system as a whole) → π/2.

Alternatively, we could say that the ratio of numbers with repeating to non-repeating prime structures (for the number system as a whole) → 2/π.
Now of course this represents a Type 1 view of number, where the unique prime factors of each number are expressed with respect to the default dimensional power of 1.

So 3 for example, as a constituent factor, is more fully expressed as 3^1.
Recently my attention turned to what in fact represents a complementary type problem.

We can view the various roots of a number in Type 2 terms, where now in inverse terms, the default base number of 1 is raised to dimensional powers that can vary.

So for example the 3 roots of 1 would thereby be expressed as 1^(1/3), 1^(2/3) and 1^(3/3) respectively. Thus concentrating on the dimensional values (representing the Type 2 notion of number), the three values are 1/3, 2/3 and 3/3 respectively.

In more general terms, the n roots of 1 - again concentrating on the dimensional values - will range over all the fractions from 1/n to n/n.
Now clearly where these reflect the prime roots of 1, when we exclude the final fractional value (which always reduces to 1), all the other fractional values will be irreducible. In other words it will not be possible to reduce any of these fractions to a smaller fraction (as no common factor can exist with respect to both numerator and denominator).

However where a composite number n is involved, the n roots of 1 will then yield fractional values where some are reducible and others irreducible.
For example if we take the 12 roots of 1 (where of course 12 is composite), the 12 fractional values generated will be 1/12, 2/12, 3/12, 4/12, 5/12, 6/12, 7/12, 8/12, 9/12, 10/12, 11/12 and 12/12.

Now, we can easily see that 1/12, 5/12, 7/12 and 11/12 are irreducible fractions.

However the remaining fractions here i.e. 2/12, 3/12, 4/12, 6/12, 8/12, 9/12, 10/12 and 12/12 are reducible (with both numerator and denominator containing common factors).
An interesting question then arises with respect to the number system as a whole, as to the average frequency of fractional values that are irreducible and reducible respectively.

Remarkably, this parallels closely the earlier relationship as to the average frequency of numbers with non-repeating and repeating prime structures respectively.

So the hypothesis that I now offer is that the average frequency of fractional values that are irreducible → 1/(1 + 2/π) = π/(π + 2); then the average frequency of fractional values that are reducible → 1/(1 + π/2) = 2/(π + 2).

So therefore the ratio of irreducible to reducible fractions → π/2.

Alternatively, the ratio of reducible to irreducible fractions → 2/π.
Now, I counted the irreducible fractions for the roots of all numbers to 100 = 3054 (approx.), relative to all fractions (5050). This works out at .60475, which is slightly less than π/(π + 2) = .61101.
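For anyone wishing to repeat this count, a short Python sketch follows (it takes "irreducible" to mean gcd(numerator, denominator) = 1, so the exact total may differ marginally from the approximate figure quoted above):

```python
# Count irreducible fractions k/n over all n up to 100 and compare the
# overall proportion with pi/(pi + 2).
from math import gcd, pi

N = 100
irreducible = sum(1 for n in range(1, N + 1)
                    for k in range(1, n + 1) if gcd(k, n) == 1)
total = N * (N + 1) // 2                        # 5050 fractions in all
print(irreducible, total, irreducible / total)  # compare with pi/(pi + 2) = .61101...
```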
Now in counting up irreducible fractions, the primes make the greatest contribution. So if n is a prime, n – 1 of its n fractions will be irreducible.
The formula n/(log n – 1) predicts 47 primes up to 200, with the actual occurrence = 46.

However it predicts 28 up to 100 (where the actual occurrence = 25).

This would suggest that the actual frequency of primes is less than what would be generally expected up to 100, which accounts in large measure for the underestimate that I obtained.
However if one counts all fractions to 110, where the actual no. of primes = 29 against a predicted value of 30, one now gets the much better estimate of 3726/6105 = .6103.

So there is little doubt to my mind that the formula I have suggested is the correct one, bearing a direct complementary (Type 2) relationship to the earlier (Type 1) relationship that was mentioned in relation to the average frequency of numbers with non-repeating prime structures.

In fact the intuitive realisation of this fact had already suggested to me what the answer would be before I actually carried out any numerical calculations to verify its nature.
Tuesday, June 14, 2016
More Interesting Relationships
Here are some interesting relationships, which I discovered some time ago in relation to the Riemann Zeta Function (for positive integers > 1).
∑{ζ(s) – 1} ~ 1 (for s = 2, 3, 4, …)

For example from adding up values for s = 2 to 10, we obtain

.6449 + .20205 + .08232 + .03692 + .0173 + .00834 + .00407 + .002008 + .000904

= .99812 (which is already close to 1)
Then ∑{ζ(s) – 1} ~ .75 (for even values of s i.e. s = 2, 4, 6, …)

So for even values of s from 2 to 10, we obtain

.6449 + .08232 + .0173 + .00407 + .000904

= .749494 (which again is very close to .75)
Also ∑{ζ(s) – 1} ~ .25 (for odd values of s i.e. s = 3, 5, 7, …)

So for odd values of s from 3 to 10, we obtain

.20205 + .03692 + .00834 + .002008

= .249318 (which for just 4 values computed is again close to .25)
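These three sums can be checked numerically in a few lines of Python (a sketch only; ζ(s) is approximated here by simply truncating its defining series rather than by any library routine):

```python
# Approximate zeta(s) for integer s >= 2 by truncating the defining series,
# then sum zeta(s) - 1 over all s, over even s, and over odd s.
def zeta(s, terms=100000):
    return sum(n ** -s for n in range(1, terms + 1))

values = {s: zeta(s) - 1 for s in range(2, 41)}
print(sum(values.values()))                              # ~ 1
print(sum(v for s, v in values.items() if s % 2 == 0))   # ~ .75
print(sum(v for s, v in values.items() if s % 2 == 1))   # ~ .25
```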
There are also interesting connections as between the Riemann zeta function (for positive integer values of s) and the Euler-Mascheroni constant i.e. γ = .5772156649…

As is well known, for ζ(s) where s = 1 (i.e. the harmonic series), when the summation of the series is taken over a finite set of values n,

ζ(1) ≈ log n + γ

However γ in turn is related to all ζ(s) - now summed without limit - for the other positive integer values of s in the following manner!

γ = ζ(2)/2 – ζ(3)/3 + ζ(4)/4 – ζ(5)/5 + ……
So for s = 2 to 10, we obtain

1.644934/2 – 1.202056/3 + 1.082323/4 – 1.03692/5 + 1.0173/6 – 1.00834/7 + 1.00407/8 – 1.002008/9 + 1.000904/10

= .62474….
Now this approximation is still not very accurate, but in this case the series converges very slowly towards the true value (alternating above and below the true value).
A better approximation however can be obtained as follows:

1 – γ = {ζ(2) – 1}/2 + {ζ(3) – 1}/3 + {ζ(4) – 1}/4 + {ζ(5) – 1}/5 + ……

So again summing for s = 2 to 10, we obtain

.644934/2 + .202056/3 + .082323/4 + .03692/5 + .0173/6 + .00834/7 + .00407/8 + .002008/9 + .000904/10

= .42268 (correct to 5 decimal places), which gives γ = .57732, which is already a very good approximation to the true value i.e. .5772156649…
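Both series for γ can likewise be checked numerically (again a sketch, with the same crude truncated-series ζ(s); the cut-off points are arbitrary):

```python
# Numerical check of the two series for the Euler-Mascheroni constant.
def zeta(s, terms=50000):
    return sum(n ** -s for n in range(1, terms + 1))

zs = {s: zeta(s) for s in range(2, 60)}
gamma_alt = sum((-1) ** s * zs[s] / s for s in range(2, 60))
gamma_fast = 1 - sum((zs[s] - 1) / s for s in range(2, 60))
print(gamma_alt)    # still a rough estimate (the alternating series converges slowly)
print(gamma_fast)   # already very close to the true value .5772156649...
```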
Also ζ(s)/ζ(s + 1) ~ 1, and {ζ(s) – 1}/{ζ(s + 1) – 1} ~ 2, again for sufficiently large s.

For example ζ(9) = 1.002008 and ζ(10) = 1.000904

Therefore ζ(9)/ζ(10) = 1.002008/1.000904 = 1.0011… (which is already close to 1)

Likewise ζ(9) – 1 = .002008 and ζ(10) – 1 = .000904

Therefore {ζ(9) – 1}/{ζ(10) – 1} = .002008/.000904 = 2.2212…

This is not yet very close to 2. However for larger s the ratio will progressively fall towards 2!
In all cases i.e. for positive integers > 1, ζ(s) can be expressed as 1 + k (where k is less than 1).

So for example ζ(2) = 1.6449… = 1 + .6449…

We can then define a "complementary" number as 1 – k.

So in the case of ζ(2), 1 – k = 1 – .6449… = .3551…
We can now define a new set of twin relationships as π^s/t_s1 = 1 + k and π^s/t_s2 = 1 – k respectively, where t_s1 and t_s2 are the two denominators associated with the common numerator π^s.
For example when s = 2, π^2/6 = 1 + .6449… and π^2/27.79… = 1 – .6449… respectively.

So here, t_s1 = 6 and t_s2 = 27.79… respectively.

And the difference of t_s2 and t_s1 = 27.79… – 6 = 21.79…

When s grows sufficiently large, {t_s2 – t_s1}/{t_(s+1)2 – t_(s+1)1} ~ 2/π
For example when s = 9, k = .002008; t_s1 = 29749 (to nearest integer) and t_s2 = 29869.

Therefore t_s2 – t_s1 = 120.

When s = 10, k = .000904; t_(s+1)1 = 93555 and t_(s+1)2 = 93733.

Therefore t_(s+1)2 – t_(s+1)1 = 178.

So {t_s2 – t_s1}/{t_(s+1)2 – t_(s+1)1} = 120/178 = .6741…

This compares fairly well with 2/π = .6366…

And the approximation steadily improves for larger s.
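A quick numerical sketch of this last relationship (with ζ(s) again approximated by a truncated series, and the displayed range of s chosen arbitrarily):

```python
# t_s1 = pi^s/(1 + k) and t_s2 = pi^s/(1 - k), with k = zeta(s) - 1.
# The ratio of successive differences {t_s2 - t_s1}/{t_(s+1)2 - t_(s+1)1}
# should drift towards 2/pi as s grows.
from math import pi

def zeta(s, terms=50000):
    return sum(n ** -s for n in range(1, terms + 1))

def t_diff(s):
    k = zeta(s) - 1
    return pi ** s / (1 - k) - pi ** s / (1 + k)

for s in range(9, 16):
    print(s, t_diff(s) / t_diff(s + 1))   # compare with 2/pi = .6366...
```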
Wednesday, June 8, 2016
Approximating the Non-Trivial Zeros (2)
Having approximated the first 10 of the non-trivial zeros, I decided to continue on and calculate the first 30.
Once again I used the slightly modified formula, i.e. (t/2π)(log(t/2π) – 1) + 1.
And as there are 29 non-trivial zeros up to 100, this means that we have thereby approximated all the non-trivial zeros for t to 100!
However in the original approximation of values, where I adjusted the first calculation for each zero downward (by half the deviation from the next value), a bias still remained, in that the overall sum of the actual zeros tended to consistently overshoot that of the corresponding approximations. Therefore in the attempt to eliminate this bias I decided to use a new adjustment factor (based on the deviations of the 1st set of approximations).
Therefore to more accurately approximate the nth zero, I decided to multiply the deviation as between the nth and (n + 1)st value by (1 – 2/π) and then subtract this from the original 1st approximation.
So below, I have provided a table showing the three different approximations, together with the actual values for the non-trivial zeros.
I have then highlighted the most recent approximations and actual values in bold type for easier comparison.
Once again, I consider it striking how the simple general formula provides such a convenient means for calculating, with stunning accuracy, not only the frequency of zeros up to any given t, but likewise a ready means for approximating the value of each one of the non-trivial zeros.
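For completeness, here is a short sketch of how the first column of predictions in the table can be reproduced (this reflects one natural reading of the method: the nth first-pass prediction is taken as the value of t for which (t/2π)(log(t/2π) – 1) + 1 equals n, with the third-column adjustment then applied as described above):

```python
# Solve (t/2pi)(log(t/2pi) - 1) + 1 = n for t by bisection, then apply the
# (1 - 2/pi) adjustment using the deviation from the next first-pass value.
from math import log, pi

def zero_count(t):
    x = t / (2 * pi)
    return x * (log(x) - 1) + 1

def predicted(n):
    lo, hi = 2 * pi, 1000.0          # zero_count is increasing for t > 2*pi
    for _ in range(100):
        mid = (lo + hi) / 2
        if zero_count(mid) < n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in range(1, 6):
    p1, p1_next = predicted(n), predicted(n + 1)
    p3 = p1 - (p1_next - p1) * (1 - 2 / pi)
    print(n, round(p1, 2), round(p3, 2))   # compare with columns (1) and (3) of the table
```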
The difference as between the actual values for the zeros and their corresponding approximations is due to the local random nature of the zeros.
However this randomness is at the other extreme from the primes. In fact both the primes and non-trivial zeros complement each other in a dynamic interactive manner.
So the behaviour of individual primes is as independent as possible consistent with maintaining an overall collective interdependence with each other (through the natural numbers).
However the collective behaviour of the non-trivial zeros is as interdependent (i.e. ordered) as possible, consistent with each zero maintaining an individual local independence.
Therefore whereas the simple general formula for frequency of primes can only hope to predict with a strictly relative degree of accuracy, the corresponding formula for frequency of non-trivial zeros can predict in absolute terms with a remarkable level of accuracy.
Riemann Zeros | Predicted Location (1) | Deviation of Zeros | Predicted Location (2) | Predicted Location (3) | Actual Location
1st | 17.08 | – | 14.34 | 15.09 | 14.13
2nd | 22.56 | 5.48 | 20.27 | 20.90 | 21.02
3rd | 27.14 | 4.58 | 25.09 | 25.65 | 25.01
4th | 31.24 | 4.10 | 29.34 | 29.86 | 30.43
5th | 35.04 | 3.80 | 33.28 | 33.76 | 32.94
6th | 38.56 | 3.52 | 36.88 | 37.34 | 37.59
7th | 41.92 | 3.36 | 40.28 | 40.73 | 40.92
8th | 45.20 | 3.28 | 43.64 | 44.06 | 43.33
9th | 48.33 | 3.13 | 46.82 | 47.23 | 48.01
10th | 51.36 | 3.03 | 49.89 | 50.29 | 49.77
11th | 54.31 | 2.95 | 52.87 | 53.26 | 52.97
12th | 57.19 | 2.88 | 55.78 | 56.17 | 56.45
13th | 60.00 | 2.81 | 58.62 | 59.18 | 59.35
14th | 62.76 | 2.76 | 61.40 | 61.78 | 60.83
15th | 65.47 | 2.71 | 64.14 | 64.51 | 65.11
16th | 68.12 | 2.65 | 66.81 | 67.17 | 67.08
17th | 70.74 | 2.62 | 69.45 | 69.80 | 69.55
18th | 73.32 | 2.58 | 72.05 | 72.40 | 72.07
19th | 75.86 | 2.54 | 74.61 | 74.95 | 75.70
20th | 78.36 | 2.50 | 77.12 | 77.46 | 77.14
21st | 80.83 | 2.47 | 79.60 | 79.94 | 79.34
22nd | 83.28 | 2.45 | 82.07 | 82.40 | 82.91
23rd | 85.70 | 2.42 | 84.50 | 84.83 | 84.74
24th | 88.09 | 2.40 | 86.91 | 87.23 | 87.43
25th | 90.46 | 2.37 | 89.29 | 89.61 | 88.81
26th | 92.80 | 2.34 | 91.64 | 91.95 | 92.49
27th | 95.12 | 2.32 | 93.96 | 94.28 | 94.65
28th | 97.43 | 2.31 | 96.28 | 96.60 | 95.87
29th | 99.72 | 2.29 | 98.59 | 98.90 | 98.83
30th | 101.98 | 2.26 | 100.86 | 101.17 | 101.32
31st | 104.22 | 2.24 | – | – | –