Posts for p4wn3r

Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
I know :) This time I just wanted to troll a little bit. I'm pretty sure someone will digest it or provide an elementary solution. :P
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Let a/b denote a rational number. Define the p-adic valuation ordp(a/b) of a rational number, for a prime p, as follows: if e is the largest number such that p^e divides a, and f is the largest number such that p^f divides b, then ordp(a/b) = e - f. It should be clear that this definition is invariant with respect to all equivalent fractions for a given rational number.

Also, ordp satisfies the property ordp(xy) = ordp(x) + ordp(y). Perhaps more importantly, it satisfies the following (it can be proven by writing out the fraction sum and taking the lcm):

ordp(x+y) >= min(ordp(x), ordp(y)) (*)

In particular, if ordp(x) and ordp(y) are different, then the previous inequality becomes an equality. This is also simple, if boring, to prove.

Now, denote by Hn the sum of all numbers of the form 1/k, for k from 1 to n, and let us look at ord2(Hn). Notice that n must lie between two consecutive powers of two, that is, 2^m <= n < 2^(m+1). Now, I claim that 1/2^m is the term in Hn which has the smallest value of ord2 among all terms in the sum, and that it is the only term attaining it. Indeed, we have ord2(1/2^m) = -m, and any other term with ord2(1/k) <= -m would require k = 2^m * c, with c >= 2, which is impossible, since any k of this form would exceed n.

Because of this, we can apply (*) to the n terms of the sum, paying attention to the fact that there's only one minimum term, with value -m. The sum of this term with any other generates a number whose 2-adic valuation is also -m, and it follows that ord2(Hn) = -m. In particular, Hn is not an integer, because no integer can have negative p-adic valuation.
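For anyone who wants to see this numerically, here's a quick sanity check (my own sketch, not part of the proof, using Python's exact rational arithmetic): it computes ord2(Hn) for small n and confirms it equals -m, where 2^m is the largest power of two not exceeding n.

Language: Python

```python
from fractions import Fraction

def ord2(q):
    # 2-adic valuation of a nonzero rational: e - f,
    # where 2^e divides the numerator and 2^f the denominator
    e, f = 0, 0
    num, den = q.numerator, q.denominator
    while num % 2 == 0:
        num //= 2
        e += 1
    while den % 2 == 0:
        den //= 2
        f += 1
    return e - f

for n in range(2, 50):
    H = sum(Fraction(1, k) for k in range(1, n + 1))
    m = n.bit_length() - 1      # largest m with 2^m <= n
    assert ord2(H) == -m        # so H is never an integer
```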
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Mitjitsu wrote:
Will a perpetual motion machine ever be invented?
Perpetual motion machines come in many kinds. The first kind is a machine that violates conservation of energy, moving faster and faster without any external source of energy. For something like this to be possible, it would require a drastic overhaul of physical theories. Every viable physical theory postulates that elementary interactions respect conservation of energy, and there's no evidence that it's violated.

Machines of the second kind can extract more useful energy from a system than what would be allowed by the second law of thermodynamics, thus reducing the entropy of the system; this is what is commonly called a Maxwell demon. These are widely considered impossible, too, but for a different reason than the first kind. The biggest difference is that, for the second kind, you can still write down some sensible-looking equations that show it working. The big question is whether these equations can be implemented in a realistic system at all.

Mathematically speaking, the second law of thermodynamics works only for ergodic systems, and there are systems which are non-ergodic, so we could try to circumvent the second law by engineering the system to be non-ergodic. Perhaps the simplest example I know is the so-called ellipsoid paradox, where you set up some mirrors to focus all light from a black body, and then never reach thermal equilibrium. In the article I linked, the authors perform some numerical simulations and show that, under the realistic assumption that the black body is not a point but has a definite size, the ray focusing does not work perfectly, and the system reaches thermal equilibrium after all.

This is a very general problem. Although you can come up with models that suggest the 2nd law can be violated, they are ultimately all approximations, and in every case we know, using a better approximation makes the system ergodic again and brings back the 2nd law, which is a rather interesting way to understand why the 2nd law works so well.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
r57shell wrote:
p4wn3r wrote:
r57shell wrote:
p4wn3r wrote:
Now, take an element a and look at the subgroup it generates: {1, a, a^2, ..., a^(d-1)}, where we write d as the size of this subgroup. Notice that we have a^d=1.
You first need to prove that it can't get stuck somewhere for eternity. Without additional knowledge, all we know is that all powers are invertible elements of Zn, in other words, values from 1 to p-1, but that doesn't prove there is a nonzero d for which a^d = 1. Edit: Ah, I missed it: you say it generates a subgroup: hmm, is that trivial?
The group is finite, so obviously it cannot get stuck forever.
No, why can't 1, a^1, a^2, a^3, ... correspond to 1,2,3,3,3,3,3,3,3,3,3? Each element is from Zn, but it doesn't go back to 1.
p4wn3r wrote:
It's quite general that the set of all powers of an element forms a group, which is abelian and cyclic. If you have doubts, you can check using the method I showed earlier.
I don't have doubts. I know that for any a I'll get back 1 because it is cyclic. I just don't remember proof. What you said doesn't look like a proof. "it is finite" - not an argument unless you use something in addition.
Well, the "something in addition" is in fact right next to the part you quoted, which you left out for some reason.
p4wn3r wrote:
Eventually you reach a^m = a^n, and since a is invertible you can cancel out and get a^(m-n) = 1.
Do you have any doubts about this? It's implicitly assumed that n < m, and the cancellation is justified because you can multiply both sides by a^(-n).
r57shell wrote:
Doesn't the fact that powers go back to 1 only work for cyclic groups?
No, it works for all finite ones.
r57shell wrote:
When I said "I don't know a trivial proof" I meant a proof with much fewer facts used, and much "simpler" facts. For example, you could prove Fermat's little theorem using Euler's theorem a^phi(n) = 1 (mod n), but that's an ugly way.
I don't think so. If you want to prove Euler's theorem using elementary arguments, you first need to prove Fermat's little theorem, then work out the prime power case and use the Chinese Remainder Theorem to work out the general case. Unless you can come up with a proof of Euler's theorem that does not invoke Fermat, it's circular to claim that Euler proves Fermat. Incidentally, if you use groups, no distinction is needed. That's because phi(n) is, by definition, the size of the multiplicative group mod n, and the group-theoretical proof works just fine: just replace p-1 with phi(n), the size of the group.
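To see the group-size statement in action, here's a quick numeric check (my own sketch, not part of the argument above): phi(n) counts the units mod n, and raising any unit to that power gives 1, which is Euler's theorem, with Fermat as the special case n = p.

Language: Python

```python
from math import gcd

# phi(n) = size of the multiplicative group mod n;
# every unit a satisfies a^phi(n) = 1 (mod n)
for n in [5, 12, 35, 100]:
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    phi = len(units)
    assert all(pow(a, phi, n) == 1 for a in units)
```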
r57shell wrote:
You said a^(p-1) = 1 is a trivial fact just from the size. Now I wonder, did you mean it's trivial for someone who is familiar with group theory and knows all that stuff about phi(n), Lagrange's theorem, Bezout's identity, orders of elements, orders of groups, subgroups, invertible elements, and so on?
All right, several things:

(*) You don't need to know anything about phi(n) to show this particular statement; I don't know why you're bringing it up here.
(*) Why is knowing Bezout's identity a problem? It's a result used even in elementary number theory; abstract algebra is not the thing that adds it to the problem.
(*) Regarding Lagrange's theorem and orders of elements, they're in the first chapter of any book. I can hardly think of anyone who knows group theory and doesn't know how to apply Lagrange's theorem. If we go to Wikipedia's article about this proof, it says just that: "This proof requires the most basic elements of group theory."

The thesis I am trying to present here (unsuccessfully, it looks like) is that it's more productive to study, and even define, number theory in an algebraic manner right away. It takes some time to get familiar with algebra, yes, but it's much better to study it than to spend lots of time on elementary proofs that don't have a suitable generalization, and are much longer.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
r57shell wrote:
p4wn3r wrote:
Notice that invertible elements in Zn, for n any number, form a group, because for any two a and b in Zn, ab^(-1) is also in Zn.
I think you mean (ab)^(-1).
No, I don't. You do not seem very familiar with algebra. Proving that a, b in a set implies that ab^(-1) is in the set is a shortcut to prove that something is a subgroup. The subgroup axioms are:

(1) the identity is in H
(2) for a given a in H, its inverse a^(-1) is also in H
(3) for given a, b in H, their product ab is also in H

If you prove that ab^(-1) is in H, then all 3 points are satisfied. For (1), pick a=b, which gives the identity e. For (2), pick a=e, which gives b^(-1). For (3), send b -> b^(-1), which gives ab. Proving that ab^(-1) is in H already gives it the structure of a group. No further argument is needed.
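Here's a small numeric illustration of the one-step test (my own sketch, taking the units mod 15 as H; the pow(b, -1, n) modular inverse assumes Python 3.8+):

Language: Python

```python
from math import gcd

n = 15
H = [a for a in range(1, n) if gcd(a, n) == 1]  # units mod 15

def inv(b):
    return pow(b, -1, n)  # modular inverse (Python 3.8+)

# one-step subgroup test: a * b^(-1) stays in H for all a, b in H
assert all(a * inv(b) % n in H for a in H for b in H)

# and the three axioms follow: identity, inverses, closure
assert 1 in H
assert all(inv(a) in H for a in H)
assert all(a * b % n in H for a in H for b in H)
```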
r57shell wrote:
p4wn3r wrote:
Now, take an element a and look at the subgroup it generates: {1, a, a^2, ..., a^(d-1)}, where we write d as the size of this subgroup. Notice that we have a^d=1.
You first need to prove that it can't get stuck somewhere for eternity. Without additional knowledge, all we know is that all powers are invertible elements of Zn, in other words, values from 1 to p-1, but that doesn't prove there is a nonzero d for which a^d = 1. Edit: Ah, I missed it: you say it generates a subgroup: hmm, is that trivial?
The group is finite, so obviously it cannot get stuck forever. Eventually you reach a^m = a^n, and since a is invertible you can cancel out and get a^(m-n) = 1. It's quite general that the set of all powers of an element forms a group, which is abelian and cyclic. If you have doubts, you can check using the method I showed earlier.
r57shell wrote:
Next you use Lagrange's theorem, which is a long lecture about orders.
I wholly disagree with this. The proof of Lagrange's theorem boils down to the simple fact that you can define cosets given a subgroup, and they form an equivalence relation whose classes all have the same size. Anyway, it's my feeling that you don't appreciate the simplicity of the proof because you're not very familiar with abstract algebra. This takes a long time to get used to, but, for example, when I read somewhere that the size of a group is n, it is immediate to me that the order of any element divides n, so raising an element to the n-th power gives the identity. That's why I didn't even bother to provide a proof of this when I mentioned it.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
r57shell wrote:
Here pZ is actually a subgroup of Z, and each element of Z/pZ is actually a whole set of numbers differing by multiples of p. It's easy to show that pZ with negative p is the same as with positive p.
I usually read Zp as the "ring with size p and modular arithmetic".
r57shell wrote:
p4wn3r wrote:
The reason is that the theorem follows immediately from the fact that the multiplicative group of Z_p has size p-1. And that's it, no need to write down formulas. The statement is so good that it even gives you directions on how to prove it. (Why all nonzero elements of Z_p are invertible?)
I don't know a proof from just size = p-1. Can you share it? I doubt it's trivial.
Notice that the invertible elements in Zn, for n any number, form a group, because for any two invertible a and b in Zn, ab^(-1) is also invertible in Zn. A number a is invertible in Zn if we can find an x such that ax = 1 (mod n), which means that, for some integer b, we must have ax + bn = 1. According to Bezout's identity, this is possible iff gcd(a,n) = 1. In particular, if we take n = p a prime, this holds for 1, 2, ..., p-1. Therefore, the group of units in Zp has size p-1.

Now, take an element a and look at the subgroup it generates: {1, a, a^2, ..., a^(d-1)}, where we write d for the size of this subgroup. Notice that we have a^d = 1. But since it's a subgroup, by Lagrange's theorem, d divides the size of the group, which is p-1. Therefore, we may write p-1 = kd, and we have a^(p-1) = a^(kd) = 1^k = 1 (mod p), which is just Fermat's little theorem.
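To make the chain of facts concrete, here's a small check (my own sketch, not part of the proof): for p = 13, the order d of every element divides p-1, and a^(p-1) = 1 (mod p) follows.

Language: Python

```python
def order(a, p):
    # multiplicative order of a modulo a prime p:
    # the smallest d with a^d = 1 (mod p)
    d, x = 1, a % p
    while x != 1:
        x = x * a % p
        d += 1
    return d

p = 13
for a in range(1, p):
    assert (p - 1) % order(a, p) == 0   # Lagrange: d divides p-1
    assert pow(a, p - 1, p) == 1        # Fermat's little theorem
```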
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
FractalFusion wrote:
p4wn3r wrote:
For a more modern example, consider Fermat's little theorem from the point of view of someone who only knows factorization in natural numbers. It looks quite odd why prime numbers are magical, and allow a^p - a to be a multiple of p. In contrast, by noticing that multiplication mod p defines a group, Fermat's little theorem is almost a triviality! This is the idea behind good definitions, they invite us to focus on the relevant properties to solve the problems.
Yes, but your example is about the concept of mod p, an absolute essentiality in elementary number theory. The thing about "negative primes", is that allowing them in elementary number theory would cause many well-known statements involving primes to be wrong. Not just the fundamental theorem of arithmetic, but also, for example:
Well, the point is that presenting these theorems as a set of formulas into which you can simply "plug" values of p is exactly the kind of interpretation that people want to discourage. For the record, Hungerford's book DOES include all the results you enumerated (unless, of course, you are including the 3- and 4-square theorems as "elementary", which have kinda ad hoc proofs), but they are abstracted away in the algebraic formalism, and I don't think he mentions them explicitly either.

In any case, Fermat's little theorem is proved when he discusses the structure of Z_p, when p is a prime (I think we can all agree that negative sizes for fields don't make sense, so it should be clear that p is positive here). The reason is that the theorem follows immediately from the fact that the multiplicative group of Z_p has size p-1. And that's it, no need to write down formulas. The statement is so good that it even gives you directions on how to prove it. (Why are all nonzero elements of Z_p invertible?)

The results about quadratic reciprocity, Gauss's Lemma, sums of squares, and Pythagorean triples are all in the section about factorization of quadratic integers, because the only things necessary to understand whether a number can be expressed in the form x^2 + n*y^2 are: (1) factorization (in the generalized sense), (2) the ring of quadratic integers Z[sqrt(-n)], and (3) whether a prime in Z splits or stays inert in Z[sqrt(-n)], and the answer is, spoiler, (4) quadratic reciprocity. And there you have it: Hungerford managed to prove, in two sections, things that elementary treatments of number theory do in a whole book.

I think the situation is analogous to modern programming languages, where the standard library is richer, but the code you write is a one-liner. Similarly, in modern mathematics, the definitions are heavier, but the proofs are also much shorter.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
FractalFusion wrote:
p4wn3r wrote:
The modern definition of unique factorization only requires that the factorization be unique up to permutations of the factors and multiplication by units.[/url]
I'm not sure "modern" is the right word here, unless by "modern" you mean "applied to rings in general, rather than just the natural numbers".
It is. If the results about rings in general could be restricted to just abstract algebra, then perhaps we could consider it a generalization with no direct application to elementary number theory, but we don't need to go that far to see that there are immediate applications. Historically, this approach to unique factorization domains started when people realized that all elementary results about primes of the form x^2 + n*y^2 could be derived by looking at the factorization of number systems of the form a + b*sqrt(n), when n = 3 mod 4, and (a + b*sqrt(n))/2 when n = 1 mod 4. Besides that, there's Euler's first proof that there are no positive integers satisfying x^3 + y^3 = z^3, which looks at factorization properties of Q(sqrt(-3)). The crucial argument is to realize that, while unique factorization in the natural-numbers sense is not valid there, it is if we disregard multiplication by units, and this is all that's needed for Euler's proof to work.
FractalFusion wrote:
I don't know if that is the reason Hungerford does that, but as for whether it makes theorems much simpler, if we're talking about elementary number theory, I doubt that.
It does make them much simpler, because it states, from the start, that elementary number theory is a direct consequence of properties of much more general algebraic structures; there's no reason for it to occupy a special place.

It's analogous to reading Diophantus. If one reads his problems, the equations are presented in a very convoluted way. That's because for the Greeks, negative and irrational numbers did not exist. Because of this, Diophantus never writes an equation where the LHS and the RHS give negative numbers, because for him these equations were wrong. Similarly, when considering sqrt(2), for example, he always uses geometric constructions, because he cannot write sqrt(2) in the formulas directly, since irrational numbers do not exist for him. Today, we know these restrictions are silly.

For a more modern example, consider Fermat's little theorem from the point of view of someone who only knows factorization in natural numbers. It looks quite odd why prime numbers are magical, and allow a^p - a to be a multiple of p. In contrast, by noticing that multiplication mod p defines a group, Fermat's little theorem is almost a triviality! This is the idea behind good definitions: they invite us to focus on the relevant properties to solve the problems.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Warp wrote:
All that goes well above my head, but I get the feeling that you are trying to apply a wider field of mathematics into the narrower field of integer arithmetic.
Well, even if that's true, why not? Throughout my studies I had to learn things like history, biology, etc., and I am pretty sure that all the definitions were made by people who had a much greater scope than the content I had to study. That's how it works in 99% of subjects. People who specialize in something are responsible for classifying and defining things, and those who are interested in a small subset of it go along; I don't see any problem here. I also should say I don't feel comfortable with you implying that it's me who's trying to push things around. Just search in a specialized math forum and you'll see many others using negative primes, and, as I have mentioned in this thread, even removing points from people who don't follow this convention. In all of these situations, I was not involved at all. So, keep it cool, OK?
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
So, I was a bit off with my definition of a unit: a unit is actually an invertible element. My definition is for a root of unity, which implies it's a unit, but there are units which are not roots of unity, although not in the rings usually encountered in number theory. The definition of unique factorization in terms of units is given here.
Warp wrote:
I have never heard of such a definition.
If you have any doubts, you can of course ask and I will provide references. Sometimes I am a bit imprecise because I like going into many different areas of math, but it's not possible for me to assess how much of the literature you have read...
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
The modern definition of unique factorization only requires that the factorization be unique up to permutations of the factors and multiplication by units. A unit is an element that eventually becomes 1 if you multiply it by itself a finite number of times. In the case of the integers, 1 and -1 are units. So, according to this definition, the two factorizations 15 = 3*5 = (-3)*(-5) are in fact the same, because you can pair up the prime factors so that they differ only by multiplication by a unit.

The reason I think Hungerford, and other algebra authors, prefer to allow negative primes is that it makes theorems much simpler. One result is: if a and b are integers, the image of the map (n,m) -> an + bm, over all integers n and m, is {0, +-d, +-2d, ...}, where d = gcd(a,b). If we restrict ourselves to only positive integers, it's not true that the function maps onto {0, d, 2d, ...}; we must allow n and m to be negative to get this. Changing addition to subtraction does not help either, because subtraction might not even be defined in only positive integers. Also, an important property of primes is that, if p divides ab, then p divides a or p divides b. This property is also satisfied by the negative primes.

So, it seems to me that number theory is more naturally formulated in the full set of integers Z. In particular, the usual definition of only allowing positive numbers seems to induce people to think that the results of arithmetic are only valid if we can define a set of positive numbers, which is not true. The Gaussian integers, for example, are a subset of the complex numbers, which cannot be partitioned into positive and negative sets, and all results about gcd, primes, and unique factorization still apply.
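A quick numeric illustration of that result (my own sketch, with the arbitrary sample values a = 12, b = 30): over integer n and m, the values an + bm are exactly the multiples of gcd(a,b).

Language: Python

```python
from math import gcd

a, b = 12, 30
d = gcd(a, b)   # = 6

# sample the image of (n,m) -> a*n + b*m over a window of integers
values = {a*n + b*m for n in range(-20, 21) for m in range(-20, 21)}

# every value is a multiple of d, and every small multiple of d is hit
assert all(v % d == 0 for v in values)
assert all(k * d in values for k in range(-10, 11))
```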
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
FractalFusion wrote:
I noticed the textbook you cited is in the Graduate Texts in Mathematics series. Outside ring theory, there is no need whatsoever to address negative numbers or zero when talking about prime numbers. It is only when studying ring theory that you need to consider this. Prime numbers can then be reformulated as prime elements of Z, in which case the primes are ±2, ±3, ±5, ±7, ±11, ...
Fixed the link. It's not THE Hungerford Algebra book, but the introductory one, which is not in the Graduate Texts series. And the definition is for prime numbers; it's in the first chapter, before ring theory is even discussed. EDIT: I might as well post the question that raised this issue (I'll post the original Portuguese version, so you can put it through Google Translate if you want):
Question wrote:
Seja a equação p^n + 144 = q^2, onde n e q são números inteiros positivos e p é um número primo. Determine os possíveis valores de n, p e q.
Which translates roughly as:
Question wrote:
Consider the equation p^n + 144 = q^2, where n and q are positive integers and p is a prime number. Determine the possible values of n, p and q.
It turns out that you can get more solutions by allowing p to be negative, and the statement seems to leave the possibility of p being negative purposely ambiguous. In any case, the expected answer involved the negative values of p. So, it does look like the examiners were trying to remove marks from people who thought primes can only be positive. If you ask me, I do think the most elegant definition of prime numbers, even if we work entirely in elementary number theory, is to allow them to be negative. At the same time, I think it's highly undiplomatic to put up questions like this.
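For the curious, a brute-force search makes the point (my own sketch, using Hungerford's convention that an integer is prime iff its absolute value has no nontrivial divisors): allowing negative p really does produce extra solutions, such as (-2)^7 + 144 = 4^2, which has no positive-p counterpart.

Language: Python

```python
from math import isqrt

def is_prime(k):
    # prime in the sense of Z: |k| > 1 with no nontrivial divisors
    k = abs(k)
    if k < 2:
        return False
    return all(k % d for d in range(2, isqrt(k) + 1))

solutions = []
for p in range(-50, 51):
    if not is_prime(p):
        continue
    for n in range(1, 20):
        rhs = p**n + 144
        if rhs <= 0:
            continue
        q = isqrt(rhs)
        if q * q == rhs:
            solutions.append((p, n, q))

print(solutions)
```

Within this search range, the positive-p solutions like (5, 2, 13) and (3, 4, 15) reappear with p negated and even n, but (-2, 7, 4) only exists for negative p.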
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Certainly you guys have a reason to believe a given definition is the best one. There is a reason we exclude 1 from being a prime, for example. In case there's some doubt whether the definition of only positive numbers being prime is unanimous, I can assure you it's not. For example, a very popular algebra textbook states (I changed some symbols because LaTeX is not supported here):
Hungerford wrote:
DEFINITION: An integer p is said to be prime if p is not 0, 1, -1 and the only divisors of p are 1, -1 and p, -p.
EXAMPLE: 3, -5, 7, -11, 13, -17 are prime, but 15 is not (because 15 has divisors other than 1, -1 and 15, -15, such as 3 and 5).
The context where this question was raised was that, in a given question the statement said "Let p be a prime integer" and then proceeded to ask for all solutions of a given equation. It turns out that allowing p to be negative gave more solutions, and the examiners did not give complete marks for people who provided the solutions only for positive p.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
OK, soft question now. Asking here because at another place I post about math there was some heated discussion about this and I want to know what people think. Are -2, -3, -5, -7, ..., prime numbers?
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Very nice! Here's how I attacked this problem. Warning: I used some very advanced math.

This problem is ancient; it's called the congruent number problem. It goes all the way back to Diophantus, but Fibonacci was the first to study the problem in a way that would give the solution. The idea is to reduce it to elliptic curves.

The key insight is to look at half the hypotenuse. If a, b, c denote the rational sides of a right triangle of area n, then (c/2)^2 + n and (c/2)^2 - n are also rational squares. To see this, simply substitute c^2 = a^2 + b^2. That means we can find a rational triangle of area n whenever three rational squares are in an arithmetic progression with common difference n.

To solve this, it's easier to work in projective space, where you have four coordinates (x,y,z,w) that collapse into (x/w, y/w, z/w). It turns out that the solutions to the problem correspond to the intersection of two quadrics in projective space, which reduces to an elliptic curve. Since I am lazy, I simply copied the parametrization from Wikipedia. If we define

x = n(a+c)/b, y = 2n^2(a+c)/b^2

then x and y satisfy y^2 = x^3 - n^2*x. To go from x and y back to a, b, c we use:

a = (x^2-n^2)/y, b = 2nx/y, c = (x^2+n^2)/y

Awesome, so our question reduces to: does the curve y^2 = x^3 - 36x have infinitely many rational points? The laziest way to check that it does is to look at Cremona's database. How to understand that data? Well, it turns out that elliptic curves have a nice operation that you can use to generate more points on the curve from known ones. This gives the curve the structure of a group. LMFDB gives us the Mordell-Weil structure (the name is because of Mordell's theorem, which says the group is finitely generated), which is Z/2 x Z/2 x Z. The Z part shows it's infinite.

What if we did not have the database? Well, there are quite a few theorems we can use to narrow down its structure: height computations, Nagell-Lutz, reduction mod p, Mazur's theorem.
I can elaborate if people want, but what's important to notice is that if we start with a rational point on the curve and keep adding it to itself, and the resulting points stay distinct long enough (by Mazur's theorem, a torsion point has order at most 12), then we can be sure there are infinitely many of them. LMFDB gives us the point (-3,9), which corresponds to our friend, the (3,4,5) triangle. From the previous discussion, if we keep adding this point to itself we'll generate infinitely many triangles of area 6. I implemented this in Python:
Language: Python

from fractions import Fraction as F

n = 6
a = F(-n*n, 1)   # coefficient of x in y^2 = x^3 + a*x
maxn = 6

def extract(x, y):
    # recover the triangle sides (p,q,r) from a point on the curve
    p = abs((x*x - n*n) / y)
    q = abs(2*n*x / y)
    r = abs((x*x + n*n) / y)
    print('({p},{q},{r})'.format(p=p, q=q, r=r))

# base point P = (-3, 9), the (3,4,5) triangle
xb = F(-3, 1)
yb = F(9, 1)
extract(xb, yb)

# point doubling: 2P
s = (3*xb*xb + a) / (2*yb)
x = s*s - 2*xb
y = s*(xb - x) - yb
extract(x, y)

# repeated addition of P: 3P, 4P, ...
for i in range(maxn):
    s = (y - yb) / (x - xb)
    nx = s*s - x - xb
    y = s*(xb - nx) - yb
    x = nx
    extract(x, y)
Looking at the digits, they grow extremely fast:
(3,4,5)
(7/10,120/7,1201/70)
(4653/851,3404/1551,7776485/1319901)
(1437599/168140,2017680/1437599,2094350404801/241717895860)
(3122541453/2129555051,8518220204/1040847151,18428872963986767525/2216541307731009701)
(43690772126393/20528380655970,246340567871640/43690772126393,5405257799550679424342410801/896900801363839325090016210)
(13932152355102290403/884619602260392601,3538478409041570404/4644050785034096801,64777297161660083702224674830494320965/4108218358333926731621213541698169401)
(4156118808548967941769601/1012483946084073924047720,12149807353008887088572640/4156118808548967941769601,21205995309366331267522543206350800799677728019201/4208003571673898812953630313884276610165569359720)
Now, Fractal, I am curious. Did you find the transformation by unwrapping the elliptic curve from the point doubling operation or managed to derive it in a more intuitive way?
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
r57shell wrote:
p4wn3r wrote:
Prove that there are infinitely many right triangles with rational side lengths whose area equals 6.
Solved just now. I did use a little bit of programming though, to see if there are any solutions. As proof that I solved it, I'll just leave the following: (k*2)*(k*4)*(k*k*3) Had to use some 'basic advanced knowledge' xD
Could you share some details about the implementation of your code? Or, at least, the first few cases? The solution does use some "basic advanced" methods, but the problem is a bit well known, so it kinda balances out. I'm asking this mostly because I can't see the hint of a solution from what you wrote, and I wonder if it differs from mine.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Posting again for an important reason. I realized that exactly 100 years ago today (in my timezone, at least), the great Ramanujan left us :( In his honor, a (very tough) number-theoretical problem. Prove that there are infinitely many right triangles with rational side lengths whose area equals 6. Also, press F to pay respects.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
The Gray code proof works pretty much the same way. To each vertex of the N-dimensional cube, we attach an N-bit binary number. For a cube that means:

000 to the point (0,0,0)
001 to the point (0,0,1)
010 to the point (0,1,0)
... and so on.

In N dimensions it works the same way. To go from a vertex to another walking along an edge means simply flipping a single bit. So, the question of the graph being Hamiltonian reduces to whether you can write a circular sequence of all N-bit numbers such that each consecutive pair differs by only one bit. The answer is the Gray code. For the cube:

000
001
011
010
110
111
101
100

If you look carefully, the N-bit Gray code is constructed just like you suggested: you prepend a 0 to each code of the (N-1)-bit one, and then prepend a 1 to each code of its reverse.
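The reflect-and-prefix construction is a few lines to implement; here's a sketch in Python (my own, not from the original post) that also checks the Hamiltonian-cycle property: consecutive codes, taken cyclically, differ in exactly one bit.

Language: Python

```python
def gray(n):
    # reflect-and-prefix construction of the n-bit Gray code
    if n == 1:
        return ['0', '1']
    prev = gray(n - 1)
    return ['0' + c for c in prev] + ['1' + c for c in reversed(prev)]

codes = gray(3)
print(codes)  # the cube sequence: 000 001 011 010 110 111 101 100

# each consecutive pair (cyclically) differs in exactly one bit,
# so this is a Hamiltonian cycle on the cube graph
for u, v in zip(codes, codes[1:] + codes[:1]):
    assert sum(x != y for x, y in zip(u, v)) == 1
```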
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Is it possible to place the vertices of an equilateral triangle on the points of a square grid?
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
FractalFusion wrote:
Every prime p>2 has a primitive root. There is a proof which uses field theory: basically, if the max order of elements in (Z/pZ)* (multiplicative group) is some m<p-1, then every element satisfies x^m - 1 = 0 over the field Z/pZ, and so x^m - 1 has p-1 roots, contradicting that x^m - 1 can have at most m roots. It is nonconstructive (no algorithm is known in general that gives a primitive root).
I know this proof. It's been a while since I last looked at these matters, but I remember it comes from the fact that the polynomial ring over the prime field is a Euclidean domain, right? In that way, you can simply take the gcd of the polynomial p(x) with x^p - x, which vanishes everywhere. The gcd has degree at most the degree m of p(x), assuming m is less than p. However, this would be impossible if p(x) had more than m roots. Is there a way to prove this without the Euclidean property?
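As a numeric aside (my own sketch, of course not a proof): for a small prime like p = 23 you can confirm by brute force that the maximal order of an element of (Z/pZ)* is exactly p-1, i.e. a primitive root exists.

Language: Python

```python
p = 23

def order(a, p):
    # multiplicative order of a mod p
    d, x = 1, a % p
    while x != 1:
        x = x * a % p
        d += 1
    return d

# the maximal order is p-1, so some element generates the whole group
assert max(order(a, p) for a in range(1, p)) == p - 1
```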
FractalFusion wrote:
The rearranging and relabelling trick is better described as conjugation. So for a permutation x, rearranging and relabelling turns it into zxz^(-1): left-multiplying by a permutation z corresponds to rearranging, and right-multiplying by z^(-1) corresponds to relabelling. Because parity of a permutation under multiplication behaves like even/odd numbers under addition, x and zxz^(-1) have the same parity. This is why rearranging and relabelling does not change parity. (So you don't have to wait for Mathologer to prove it next time.)
Interesting, I did not see that rearranging and relabelling is conjugation. What I thought when he presented it was: every permutation can be broken up into cycles, and the sign of the permutation is found by multiplying (-1)^(length+1) over all cycles. Rearranging and relabelling does not change the lengths of the cycles, so it should not change the sign either. I thought of this before looking at the blog post, and since it proves the result by cycles directly, I liked it more. The conjugation argument is pretty neat, though. EDIT: It seems to me that it only makes sense to define the ring of polynomials mod p, where it is, in fact, Euclidean. If we define them over a composite number n, we get the problem that Z/nZ has zero divisors, so the ring wouldn't even be an integral domain. That's why polynomials mod a composite number can have more roots than their degree, but not modulo a prime number.
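Both arguments are easy to check numerically. Here's a quick sketch (helper names are mine) that computes the sign from the cycle decomposition and verifies that conjugation preserves it:

```python
import itertools

def sign(perm):
    """Sign of a permutation of 0..n-1 via its cycle decomposition:
    each cycle of length L contributes a factor (-1)^(L+1)."""
    n, seen, s = len(perm), [False] * len(perm), 1
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            s *= (-1) ** (length + 1)
    return s

def compose(p, q):
    # (p o q)[i] = p[q[i]]
    return [p[q[i]] for i in range(len(p))]

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return inv

# x = a 3-cycle times a transposition, so sign(x) = (+1)*(-1) = -1
x = [1, 2, 0, 4, 3]
# conjugation z x z^-1 preserves the sign for every z
for z in itertools.permutations(range(5)):
    z = list(z)
    assert sign(compose(compose(z, x), inverse(z))) == sign(x)
```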
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
I really liked the Mathologer video proving quadratic reciprocity: Link to video The argument is based on a proof by Zolotarev which recasts the problem in terms of signs of permutations. Apparently, some guy found a nice geometrical interpretation of the permutations considered by Zolotarev. I think the video could be improved, though. Most of the statements he introduced about the sign of permutations look like magic, but are in fact very simple when you study them carefully. Also, he does not explain why the multiplicative group of the field of integers mod p should have a primitive root. Maybe it's just me, because I know most of the math concepts, but I also found his final observation, where he uses a relabelling to prove that the sign of the permutation is the Legendre symbol, very difficult to follow. The argument in the blog post, which uses permutation cycles, although more mathy, is more natural in my opinion.
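Zolotarev's key observation is easy to verify numerically: for an odd prime p and a coprime to p, the Legendre symbol (a/p) equals the sign of the permutation x -> a*x mod p on {1, ..., p-1}. A small sketch (helper names are mine):

```python
def perm_sign(perm):
    """Sign of a permutation of 0..n-1 via its cycle decomposition."""
    n, seen, s = len(perm), [False] * len(perm), 1
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            s *= (-1) ** (length + 1)
    return s

def legendre(a, p):
    """Legendre symbol via Euler's criterion: a^((p-1)/2) mod p."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

# Zolotarev's lemma: the sign of multiplication by a equals (a/p)
for p in (7, 11, 13):
    for a in range(1, p):
        perm = [(a * x) % p - 1 for x in range(1, p)]  # 0-indexed
        assert perm_sign(perm) == legendre(a, p)
```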
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
With the quarantine, I started taking a look at some math olympiad problems after a long time. Surprisingly, I actually managed to solve some of the easiest ones! One I liked a lot is Problem 1 of IMO 2017: For each integer a_0 > 1, define the sequence of positive integers a_0, a_1, a_2, ... by:
a_{n+1} = sqrt(a_n), if sqrt(a_n) is an integer
a_{n+1} = a_n + 3, otherwise
Determine all values of a_0 for which there is a number A such that a_n = A for infinitely many values of n. Of course, IMO is infamous for very challenging problems, but this one is very doable. Keep playing around with the sequence until you notice a pattern and try to prove it. The solution is long, but I think it follows quite naturally. EDIT: Fixed typo as per the post below.
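If you want to play with the sequence before attempting a proof, a few lines of Python go a long way (a quick exploratory sketch, not a solution):

```python
import math

def step(a):
    """One step of the IMO 2017 Problem 1 sequence."""
    r = math.isqrt(a)
    return r if r * r == a else a + 3

def orbit(a0, steps=100):
    """The first `steps` terms starting from a0."""
    seq, a = [], a0
    for _ in range(steps):
        seq.append(a)
        a = step(a)
    return seq

# try a few starting values and look at the long-run behaviour
for a0 in range(2, 10):
    print(a0, sorted(set(orbit(a0)[-10:])))
```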
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Warp wrote:
Or, perhaps, we only see a 3D slice of the 4D spacetime because it's physically impossible to "see" into the time axis. We cannot look at the past or the future and "see" what's there. We can only perceive and measure what's in the current time slice, period, and thus we only see a 3D slice of the entire thing. Being able to detect things outside of the current time slice might break some fundamental ontological physical property, or something.
This is, strictly speaking, not correct. The notion that the present is a 3-dimensional slice of spacetime is a Galilean/Newtonian one, not a relativistic one. The problem is that, in relativity, simultaneity depends on the observer: two events that happen simultaneously in one frame are not simultaneous in another. Maybe this picture will help (the webpage where it's hosted is also good at explaining relativity, and uses Cerulean and Vermilion instead of Red and Blue to name the colors, probably a Pokemon fan!): Red represents the simultaneity surfaces (the 3d-slices) of one frame, while blue represents the simultaneity surfaces of another frame moving at a constant velocity relative to it. Notice that the 3d-slices of blue are tilted with respect to red, so these surfaces are different. The fundamental physical principle that says we cannot "see" into the past is called causality. But the notion of causality is very different in relativity, precisely because simultaneity depends on the observer. This apparent paradox is resolved by classifying the separation between spacetime events into three categories:
* lightlike separations, where the events are connected by a ray of light
* timelike separations, where the events differ by a pure time interval in some frame
* spacelike separations, where the events differ by a pure space interval in some frame
It turns out that only timelike-separated events can be causally related. Intuitively, that means that if a star A exploded at a point X and time T, and another one at a point X' and a future time T', it's possible that the first explosion caused the second only if the separation between these events is timelike (which is just a fancy way of saying that they are close enough that you can travel between them slower than the speed of light). Geometrically, that gives the following figure, which we call the light cone: Notice that the "present" is not a 3d-slice, but a single point. The future is the forward light cone, and the past is the backward one.
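The lightlike/timelike/spacelike classification is just the sign of the Minkowski interval. A minimal sketch (signature and unit conventions are my choice):

```python
def classify(dt, dx, dy, dz, c=1.0):
    """Classify the separation of two spacetime events using the
    Minkowski interval s^2 = (c*dt)^2 - dx^2 - dy^2 - dz^2
    (signature +---, units with c = 1 by default)."""
    s2 = (c * dt) ** 2 - dx * dx - dy * dy - dz * dz
    if s2 > 0:
        return "timelike"   # the events can be causally related
    if s2 < 0:
        return "spacelike"  # no frame-independent time ordering
    return "lightlike"      # connected by a ray of light

print(classify(2, 1, 0, 0))  # timelike
print(classify(1, 2, 0, 0))  # spacelike
print(classify(1, 1, 0, 0))  # lightlike
```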
Everything else is neither past nor future, but causally independent of the current event.
----
To finish, I apologize if I was too technical in this post, but I think it's warranted for the following reasons:
* When physicists describe something like extra dimensions and curvature of space, it's mostly a metaphorical construction. By this I mean we don't actually believe that the world is a completely different scenario and that a huge conspiracy happens to hide it. We are working with a well-defined mathematical entity which may or may not have analogies to the world as we usually perceive it, but the point is: we don't care; as long as it's consistent, it's possible to derive a physical theory from it.
* When we learn to work with theories, a highly nontrivial part of the job is to separate what's only mathematical abstraction from measurable quantities. For example, in general relativity there's a lot of talk about differential forms, tensors, etc., but what's actually measured are things like the precession of Mercury's orbit, the redshift induced by the expansion of the universe, gravitational lensing, etc. Most of the time you are working with two descriptions of the system that predict the same measurements, and recognizing this is a tough job; one needs to go through the equations.
* Because of my former points, it's not really possible for physics to answer whether space is really four-dimensional, because from the point of view of experiment that's a meaningless question. Physicists know this instinctively because they have spent a lot of time deriving results, often with different formulations of the same theory, so they know that any experimental result validating one formulation automatically validates a huge number of others. If someone is studying relativity to find out whether dimensions are real, that's a question for philosophy, not physics.
* Also related to the previous points: because popular expositions of physics have limited scope to address these issues, it's often necessary to be very metaphorical, but this line of reasoning has serious limitations. If you are content with metaphors, never get to the actual mathematical formulations, and never separate what's pure math from what's actually observable, you have no hope of understanding why relativity is a consistent physical theory. As I emphasized before, metaphors are simply that: metaphors.
I stress that all the spacetime considerations I posted here are discussed very well in the first chapters of any good book on relativity, and the math you need to read them is not very hard compared to the rest of physics. Since there seems to be considerable interest in relativity spanning several years in this thread, I think it's more productive to learn it properly from these sources and do the exercises than to keep relying on metaphors. My favorite explanation is the one given in the first chapters of this book. Have fun!
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Yup, usually one measures time in meters, as in "1 meter of time is the time that light takes to travel one meter". But measuring distance in units of time works too. The other (perhaps stronger) motivation is this: one can derive Galilean mechanics by requiring that the laws of physics are invariant under translations and rotations in 3D space, and that all inertial frames agree on the distance between two points. To derive relativity, the last condition is changed: you now require that the speed of light is the same in all inertial frames, which of course forces space and time to change between frames. This derivation is very simple and quite accessible to anyone who knows high school math; if anyone is interested, any good physics textbook on relativity does it in the first chapter. It turns out that if you incorporate time as a dimension, a change of frame is equivalent to a rotation in spacetime. So, a simpler definition of relativity is just: require that the laws of physics are invariant under translations and rotations in 4D spacetime. In some sense, it's a simpler derivation.
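The "rotation in spacetime" statement can be checked directly: a boost with rapidity phi = atanh(v) preserves t^2 - x^2 exactly as an ordinary rotation preserves x^2 + y^2. A sketch in units with c = 1 (my own illustration, not from the post):

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with speed v (c = 1). Equivalent to a
    hyperbolic rotation by rapidity phi = atanh(v):
    t' = t*cosh(phi) - x*sinh(phi),  x' = x*cosh(phi) - t*sinh(phi)."""
    g = 1.0 / math.sqrt(1.0 - v * v)  # gamma = cosh(phi), gamma*v = sinh(phi)
    return g * (t - v * x), g * (x - v * t)

t, x, v = 3.0, 1.0, 0.6
t2, x2 = boost(t, x, v)
# the spacetime interval is invariant, like r^2 under ordinary rotations
assert abs((t2 * t2 - x2 * x2) - (t * t - x * x)) < 1e-9
```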
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
I don't think it would make things weird. We must distinguish between a physical theory and the language in which the theory is implemented. For example, you can write the exact same command-line application in different programming languages. The code might look weird in one language and amazing in another. What it looks like has little to do with what the program does, and much more to do with the features of the programming language. In mathematics, there's a concept called isomorphism, which basically means that two different mathematical objects can be treated as the same thing. Modern mathematics treats isomorphism as some form of ambiguity in the formalism: "good" mathematical languages should have as few isomorphism classes as possible, to avoid redundancies. In General Relativity, it's the same thing. Just because the theory was historically derived from a geometrization of spacetime doesn't mean that's the only way to derive it. To give an example, there is a derivation due to Weinberg: you can derive the equations of general relativity by starting with a quantum theory in flat space (imposing ad hoc that the speed of light is the same in all inertial frames, if you don't like time as a dimension), then requiring that gravity is a force transmitted by a spin-2 particle. After that, take quantum mechanics away and look at the classical limit. It turns out that the only possible equations which don't give any contradiction in this procedure are those of GR. There you have it: gravity completely derived in flat space. Incidentally, that's a point where I think popular expositions of physics get it wrong. It's usually said that the geometric interpretation is fundamental for understanding GR. I certainly think it's the easiest, but modern algebraic geometry tells us that these geometric definitions are ambiguous and there are lots of ways to arrive at the same math. In fact, that's the whole point of algebraic geometry!