Fixed the link. It's not THE Hungerford Algebra book, but the introductory one, which is not in the graduate series.
And the definition is for prime numbers; it's in the first chapter, before ring theory is even discussed.
EDIT: I might as well post the question that raised this issue (I'll post the original Portuguese version, so you can run it through Google Translate if you want):
Which translates roughly as:
It turns out that you can get more solutions by allowing p to be negative, and the statement seems to leave the possibility of p being negative purposely ambiguous. In any case, the expected answer involved the negative values of p. So, it does look like the examiners were trying to remove marks from people who thought primes can only be positive.
If you ask me, I do think the most elegant definition of prime numbers, even if we work entirely in elementary number theory, is to allow them to be negative. At the same time, I think it's highly undiplomatic to put up questions like this.
One problem with that is the same as why 1 isn't considered a prime: It contradicts the fundamental theorem of arithmetic, which states: "Every positive whole number can be written as a unique product of primes."
Allowing for negative primes would break the uniqueness requirement.
The modern definition of unique factorization only requires that the factorization be unique up to permutations of the factors and multiplication by units.
A unit is an element that eventually becomes 1 if you multiply it by itself a finite number of times. In the case of integers, 1 and -1 are units. So, according to this definition, the two factorizations
15 = 3*5 = (-3)*(-5)
are in fact unique because you can pair all prime factors so that they differ only by multiplication by a unit.
The reason I think Hungerford, and other algebra authors, prefer negative primes, is because it makes theorems much simpler. One result is:
If a and b are integers, the image of the map (n, m) -> an + bm, as n and m range over the integers, is {0, +-d, +-2d, ...}, where d = gcd(a,b)
If we restrict ourselves to only positive integers, it's not true that the map's image is {0, d, 2d, ...}; we must allow n and m to be negative to get this. Changing addition to subtraction does not help either, because subtraction is not even always defined within the positive integers.
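A quick numeric sanity check of this claim, just a Python sketch of mine (the pair 12, 20 and the search window are arbitrary):

from math import gcd

a, b = 12, 20
d = gcd(a, b)                      # here d = 4
values = {a*n + b*m for n in range(-30, 31) for m in range(-30, 31)}
multiples = {k*d for k in range(-50, 51)}
assert multiples <= values         # every multiple of d in this window shows up
assert all(v % d == 0 for v in values)   # and nothing outside the multiples of d ever does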
Also, an important property of primes is that, if p divides ab, then p divides a or p divides b. This property is also satisfied by the negative primes.
So, it seems to me that number theory is more naturally formulated in the full set of integers Z. In particular, the usual definition of only allowing positive numbers seems to induce people to think that the results of arithmetic are only valid if we can define a set of positive numbers, which is not true. The Gaussian integers, for example, are a subset of the complex numbers, which cannot be partitioned into positive and negative sets, and all results of gcd, primes, and unique factorization still apply.
I have never heard of such a definition. A unit, or multiplicative identity element, is the only value that leaves the original value unchanged when multiplied by that element.
So, I was a bit off with my definition of a unit; a unit is actually an invertible element. My definition is for a root of unity, which implies it's a unit, but there are units which are not roots of unity, although not in the rings that usually show up in number theory.
The definition of unique factorization in terms of units is given here.
If you have any doubts, you can of course ask and I will provide references, sometimes I am a bit imprecise because I like going into many different areas of math, but it's not possible for me to assess how much of the literature you have read...
Just a few thoughts: for composite n it's also composite; trivial fact. For n = 3 it is composite (even though 3 is prime), so I don't see a connection to n being prime. But this number is the result of 9999/99 or 1111/11, which didn't help.
Edit: other thoughts:
Suppose p is our prime number, and n is also prime (using the facts above).
(10^(2n) - 1)/99 = p (mod n)
Assume 99 != 0 (mod n) (otherwise n is 3 or 11), so there is a multiplicative inverse of 99 and we can multiply both sides by 99; everything is fine.
10^(2n) - 1 = p*99 (mod n)
Using the theorem a^p = a (mod p) for each prime p and any a:
10^(2n) = 10^2 (mod n)
100 - 1 = p*99 (mod n)
99 = p*99 (mod n)
1 = p (mod n)
So p can be prime only if p - 1 is divisible by n, when n != 3 or 11. So, if somehow you can prove that p - 1 is not divisible by n, then we have a proof.
Solution: but I don't like it, I found it by checking n = 5 and n = 7.
For an odd number n = 2k+1 it can be factored; for example, for n = 5 it is 9091 * 11111 (I just checked the factorization for n = 5). So the formula is a = (10^(n-1) - 1)/99 * 90 + 1 = (10^n - 10)/11 + 1 and b = (10^n - 1)/9, and
a*b = ((10^n - 10)/11 + 1) * (10^n - 1)/9 = (10^n + 1)(10^n - 1)/99 = (10^(2n) - 1)/99.
Probably the intended idea was (10^(2n) - 1)/99 = (10^n - 1)(10^n + 1)/99; then notice that 10^n - 1 is divisible by 99 if n is even, which gives a solution for even n. Otherwise it is only divisible by 9, so we have to check that 10^n + 1 is divisible by 11 when n is odd. That is easy: 10^n = (-1)^n (mod 11).
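Here is a small Python sketch (my own check, over an arbitrary range of odd n) confirming that factorization:

for n in range(3, 16, 2):                  # odd n only
    N = (10**(2*n) - 1) // 99
    a = (10**n + 1) // 11                  # integer since 10 = -1 (mod 11) and n is odd
    b = (10**n - 1) // 9                   # the repunit 11...1 with n ones
    assert a * b == N and 1 < a < N        # a genuine nontrivial factor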
All that goes well above my head, but I get the feeling that you are trying to apply a wider field of mathematics to the narrower field of integer arithmetic.
Well, even if that's true, why not? During all the time I studied I had to learn things like history, biology, etc., and I am pretty sure that all the definitions were made by people who had a much greater scope than the content I had to study. That's how it works in 99% of subjects. People who specialize in something are responsible for classifying and defining things, and those who are interested in a small subset of it go along with them; I don't see any problem here.
I also should say I don't feel comfortable with you implying that it's me who's trying to push things around. Just search for it in a specialized math forum and you'll see many others using negative primes, and, as I have mentioned in this thread, even removing points from people who don't follow this convention. In all of these situations, I was not involved at all. So, keep it cool, OK?
1 + 100 + 100^2 + 100^3 + 100^4 + ... + 100^n cannot be prime for n greater than 1. In fact there is a simple way to factor the number, at least partially.
The reason is that the number is of the form 1 + x^2 + x^4 + ... + x^(2n) (where x = 10). This can be rewritten as:
1 + x^2 + x^4 + ... + x^(2n)
= (x^(2n+2) - 1)/(x^2 - 1)
= (x^(n+1) - 1)(x^(n+1) + 1)/((x - 1)(x + 1))
If n is odd, then (x - 1)(x + 1) divides x^(n+1) - 1 and we can factor it as
(x^(n-1) + x^(n-3) + ... + x^2 + 1)(x^(n+1) + 1).
If n is even, then x - 1 divides x^(n+1) - 1 and x + 1 divides x^(n+1) + 1, and we can factor it as
(x^n + x^(n-1) + ... + x + 1)(x^n - x^(n-1) + ... - x + 1)
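A small Python sketch (my own check over a handful of n) confirming both factorizations for x = 10:

x = 10
for n in range(2, 10):
    N = sum(x**(2*i) for i in range(n + 1))              # 1 + x^2 + ... + x^(2n)
    if n % 2 == 1:                                        # n odd
        a = sum(x**(2*i) for i in range((n + 1) // 2))    # x^(n-1) + x^(n-3) + ... + 1
        b = x**(n + 1) + 1
    else:                                                 # n even
        a = sum(x**i for i in range(n + 1))               # x^n + ... + x + 1
        b = sum((-1)**i * x**(n - i) for i in range(n + 1))  # x^n - x^(n-1) + ... + 1
    assert a * b == N and 1 < a < N                       # nontrivial factorization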
----
Negative numbers (and 0) are not relevant to the fundamental theorem of arithmetic. The fundamental theorem of arithmetic is a statement about natural numbers. (Technically, 1 is not relevant either.)
So it makes no sense to talk about negative numbers being prime or not prime, in this context.
p4wn3r wrote:
The modern definition of unique factorization only requires that the factorization be unique up to permutations of the factors and multiplication by units.
I'm not sure "modern" is the right word here, unless by "modern" you mean "applied to rings in general, rather than just the natural numbers".
Warp wrote:
I have never heard of such a definition. A unit, or multiplicative identity element, is the only value that leaves the original value unchanged when multiplied by that element.
Technically, the multiplicative identity element of a ring is called "unity", not "unit". A unit is any number that divides the unity.
E.g. In the ring of integers, 1 is the unity, and -1 and 1 are units (because they are both divisors of 1).
p4wn3r wrote:
The reason I think Hungerford, and other algebra authors, prefer negative primes, is because it makes theorems much simpler.
I don't know if that is the reason Hungerford does that, but as for whether it makes theorems much simpler, if we're talking about elementary number theory, I doubt that.
The modern definition of unique factorization only requires that the factorization be unique up to permutations of the factors and multiplication by units.
I'm not sure "modern" is the right word here, unless by "modern" you mean "applied to rings in general, rather than just the natural numbers".
It is. If the results of rings in general could be restricted to just abstract algebra, then perhaps we could consider it a generalization with no direct application to elementary number theory, but we don't need to go that far to see that there are immediate applications.
Historically, this approach to unique factorization domains started when people realized all elementary results about primes of the form x^2 + ny^2 could be derived by looking at the factorization of number systems of the form a + b*sqrt(n) when n = 3 mod 4, and (a + b*sqrt(n))/2 when n = 1 mod 4.
Besides that, there's Euler's first proof that there are no positive integers satisfying x^3 + y^3 = z^3, which looks at factorization properties of Q(sqrt(-3)). The crucial argument is to realize that, while unique factorization in the naive, natural-numbers sense is not valid there, it is valid if we disregard multiplication by units, and this is all that's needed for Euler's proof to work.
FractalFusion wrote:
I don't know if that is the reason Hungerford does that, but as for whether it makes theorems much simpler, if we're talking about elementary number theory, I doubt that.
It does make them much simpler, because it states, from the start, that elementary number theory is a direct consequence of properties of much more general algebraic structures; there's no reason for it to occupy a special place.
It's analogous to reading Diophantus. If one reads his problems, the equations are presented in a very roundabout way. That's because for the Greeks, negative and irrational numbers did not exist. Because of this, Diophantus never writes an equation where the LHS and the RHS give negative numbers, because for him these equations were wrong. Similarly, when considering sqrt(2), for example, he always uses geometric constructions, because he cannot write sqrt(2) in the formulas directly, since irrational numbers do not exist for him.
Today, we know these restrictions are silly. For a more modern example, consider Fermat's little theorem from the point of view of someone who only knows factorization in natural numbers. It looks quite odd that prime numbers are magical and allow a^p - a to be a multiple of p. In contrast, by noticing that multiplication mod p defines a group, Fermat's little theorem is almost a triviality! This is the idea behind good definitions: they invite us to focus on the relevant properties to solve the problems.
For a more modern example, consider Fermat's little theorem from the point of view of someone who only knows factorization in natural numbers. It looks quite odd that prime numbers are magical and allow a^p - a to be a multiple of p. In contrast, by noticing that multiplication mod p defines a group, Fermat's little theorem is almost a triviality! This is the idea behind good definitions: they invite us to focus on the relevant properties to solve the problems.
Yes, but your example is about the concept of mod p, an absolute essentiality in elementary number theory.
The thing about "negative primes", is that allowing them in elementary number theory would cause many well-known statements involving primes to be wrong. Not just the fundamental theorem of arithmetic, but also, for example:
* Fermat's little theorem as stated, a^p = a (mod p), fails if p is allowed to be a negative prime. Even the related statement that a^(p-1) = 1 (mod p) when a is coprime to p is false for a negative prime p.
* The statement that for an odd prime p, -1 is a quadratic residue mod p iff p is of the form 4k+1, is false if p is allowed to be a negative prime (for example, p=-3). Related: The statement that primes of the form 4k+1 are a sum of two squares is false if p is allowed to be a negative prime.
* Gauss's Lemma wouldn't make sense if p is allowed to be a negative prime.
I don't think having the concept of "negative primes" and fitting these theorems around them would help us understand these theorems in any way.
Also, looking at the topics in Hungerford's textbook, elementary number theory clearly is not the end goal here. The only thing it has in common with elementary number theory is modular arithmetic and the Chinese Remainder Theorem, neither of which are affected by allowing primes to be negative. An actual elementary number theory textbook, one which goes over things like Fermat's little theorem, primality testing, sum of squares, Pythagorean triples, etc., would most likely define primes the same way most people have done it for centuries: natural numbers only.
For a more modern example, consider Fermat's little theorem from the point of view of someone who only knows factorization in natural numbers. It looks quite odd that prime numbers are magical and allow a^p - a to be a multiple of p. In contrast, by noticing that multiplication mod p defines a group, Fermat's little theorem is almost a triviality! This is the idea behind good definitions: they invite us to focus on the relevant properties to solve the problems.
Yes, but your example is about the concept of mod p, an absolute essentiality in elementary number theory.
The thing about "negative primes", is that allowing them in elementary number theory would cause many well-known statements involving primes to be wrong. Not just the fundamental theorem of arithmetic, but also, for example:
Well, the fact that you are presenting these theorems as a set of formulas into which you can simply "plug" values of p is exactly the kind of interpretation that people want to discourage.
For the record, Hungerford's book DOES include all the results you enumerated (unless, of course, you are including the 3 and 4-square theorems as "elementary", which have kinda ad hoc proofs), but they are abstracted away in the algebraic formalism, and I don't think he mentions them explicitly either.
In any case, Fermat's little theorem is proved when he discusses the structure of Z_p, when p is a prime (I think we can all argue that negative sizes for fields don't make sense, so it should be clear that p is positive here). The reason is that the theorem follows immediately from the fact that the multiplicative group of Z_p has size p-1. And that's it, no need to write down formulas. The statement is so good that it even gives you directions on how to prove it. (Why are all nonzero elements of Z_p invertible?)
The results about quadratic reciprocity, Gauss's Lemma, sums of squares, and Pythagorean triples, all of that is in the section about factorization of quadratic integers, because the only things necessary to understand whether a number can be expressed in the form x^2+ny^2 are: (1) factorization (in the generalized way), (2) the ring of quadratic integers Z[sqrt(-n)], and (3) whether a prime in Z splits or stays inert in Z[sqrt(-n)], and the answer is, spoiler, (4) quadratic reciprocity.
And there you have it, Hungerford managed to prove, in two sections, things that elementary treatments of number theory do in a whole book. I think the situation is analogous to modern programming languages, where the standard library is richer, but the code you write is a one-liner. Similarly, in modern mathematics, the definitions are heavier, but their proofs are also much shorter.
I had to restrain myself from getting involved in this, but now I can't :D
p4wn3r wrote:
In any case, Fermat's little theorem is proved when he discusses the structure of Z_p, when p is a prime (I think we can all argue that negative sizes for fields don't make sense, so it should be clear that p is positive here).
I disagree that Z_p with negative p doesn't make sense. Z_p is most of the time defined as Z/pZ - this is the so-called 'factor group'. Here pZ is actually a subgroup of Z, and each element of Z/pZ is actually a whole set of numbers divisible by p. It's easy to show that pZ with negative p is the same as with positive p. Example: p = 5, then pZ = 5*{...,-3,-2,-1,0,1,2,3,...} = {...,-15,-10,-5,0,5,10,15,...}. When p = -5, then pZ = -5*{...,-3,-2,-1,0,1,2,3,...} = {...,15,10,5,0,-5,-10,-15,...}. So it's just in reversed order, but order doesn't matter, so Z/pZ = Z/(-p)Z. Therefore, if Z_p is defined as Z/pZ, then negative p is possible. Personally I have never heard of a different definition.
p4wn3r wrote:
The reason is that the theorem follows immediately from the fact that the multiplicative group of Z_p has size p-1. And that's it, no need to write down formulas. The statement is so good that it even gives you directions on how to prove it. (Why are all nonzero elements of Z_p invertible?)
I don't know a proof just from the size being p-1. Can you share it? I doubt it's trivial.
Here pZ is actually a subgroup of Z, and each element of Z/pZ is actually a whole set of numbers divisible by p. It's easy to show that pZ with negative p is the same as with positive p.
I usually read Z_p as the "ring with size p and modular arithmetic".
r57shell wrote:
p4wn3r wrote:
The reason is that the theorem follows immediately from the fact that the multiplicative group of Z_p has size p-1. And that's it, no need to write down formulas. The statement is so good that it even gives you directions on how to prove it. (Why are all nonzero elements of Z_p invertible?)
I don't know a proof just from the size being p-1. Can you share it? I doubt it's trivial.
Notice that the invertible elements in Z_n, for any n, form a group, because for any two invertible a and b in Z_n, ab^(-1) is also invertible in Z_n.
A number a is invertible in Z_n if we can find an x such that ax = 1 (mod n), which means that, for some integer b, we must have ax + bn = 1. According to Bezout's identity, this is possible iff gcd(a,n) = 1. In particular, if we take n = p a prime, this will hold for 1, 2, ..., p-1. Therefore, the group of units in Z_p has size p-1.
Now, take an element a and look at the subgroup it generates: {1, a, a^2, ..., a^(d-1)}, where we write d as the size of this subgroup. Notice that we have a^d = 1. But since it's a subgroup, by Lagrange's theorem, d divides the size of the whole group, which is p-1. Therefore, we may write p-1 = kd, and we have a^(p-1) = a^(kd) = 1^k = 1 (mod p), which is just Fermat's little theorem.
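If it helps, here is a small Python check of exactly these two facts (my own sketch; p = 13 is arbitrary): the units mod p are 1, ..., p-1, each element's order divides p-1, and a^(p-1) = 1 follows.

from math import gcd

p = 13
units = [a for a in range(1, p) if gcd(a, p) == 1]
assert len(units) == p - 1                 # the group of units has size p-1
for a in units:
    d, x = 1, a
    while x != 1:                          # d ends up as the order of a
        x = x * a % p
        d += 1
    assert (p - 1) % d == 0                # Lagrange: the order divides p-1
    assert pow(a, p - 1, p) == 1           # Fermat's little theorem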
I usually read Z_p as the "ring with size p and modular arithmetic".
It doesn't matter how you read it. If you plug p = -5 into the usual definition Z_p = Z/pZ you'll have the set: Z_p = { {-5k+0 : k in Z}, {-5k+1 : k in Z}, {-5k+2 : k in Z}, {-5k+3 : k in Z}, {-5k+4 : k in Z} },
which is identical to Z_5.
I had a mistake in my previous reply: when I said "all numbers divisible by p", for some reason I was talking about pZ, not about Z/pZ.
Also, if you think about remainders of division by (-5), they still run from 0 to 4 in the usual math convention.
p4wn3r wrote:
Notice that the invertible elements in Z_n, for any n, form a group, because for any two invertible a and b in Z_n, ab^(-1) is also invertible in Z_n.
I think you mean (ab)^(-1). I agree, it's obvious: (ab)^(-1) = b^(-1)a^(-1), since a*b*b^(-1)*a^(-1) = a*1*a^(-1) = 1 by associativity. And b^(-1)a^(-1) is in Z_n because each operand is in the group.
That the existence of an inverse is equivalent to ax + bn = 1 is obvious, but Bezout's identity is not trivial. Yes, it works for 1...p-1, but I asked from the beginning how to prove it just from the size being p-1, so I assume that we already know the size is p-1.
p4wn3r wrote:
Now, take an element a and look at the subgroup it generates: {1, a, a^2, ..., a^(d-1)}, where we write d as the size of this subgroup. Notice that we have a^d = 1.
You first need to prove that it can't get stuck somewhere for eternity. Without additional knowledge, all we know is that all the powers are invertible elements of Z_n, in other words values from 1 to p-1, but that doesn't prove that there is a nonzero d for which a^d = 1.
Edit: Ah, I missed it: you say it generates a subgroup. Hmm, is that trivial?
Next you use Lagrange's theorem, which is a whole lecture about orders.
Well... it doesn't look trivial. The most trivial proof I know is from the Mathologer video with the necklaces. There is another way to prove it using multiplication tables + diagrams, but I don't like it.
Notice that the invertible elements in Z_n, for any n, form a group, because for any two invertible a and b in Z_n, ab^(-1) is also invertible in Z_n.
I think you mean (ab)^(-1).
No, I don't. You do not seem very familiar with algebra. Proving that a, b in a set implies that ab^(-1) is in the set is a shortcut to prove that something is a subgroup. The subgroup axioms are:
(1) the identity is in H
(2) for a given a in H, its inverse a^(-1) is also in H
(3) for given a, b in H, their product ab is also in H
If you prove that ab^(-1) is in H, then all 3 points are satisfied. For (1), pick a = b. For (2), pick a = e (which is in H by (1)). For (3), send b -> b^(-1) (which is in H by (2)). Proving that ab^(-1) is in H already gives it the structure of a group. No further argument is needed.
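To see the shortcut on a concrete example (my own sketch, nothing from Hungerford): the nonzero squares mod a prime p satisfy the one-step criterion, hence they form a subgroup of the group of units.

p = 13
squares = {x * x % p for x in range(1, p)}     # nonzero quadratic residues mod p
for a in squares:
    for b in squares:
        b_inv = pow(b, -1, p)                  # modular inverse (Python 3.8+)
        assert a * b_inv % p in squares        # a*b^(-1) stays in the set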
r57shell wrote:
p4wn3r wrote:
Now, take an element a and look at the subgroup it generates: {1, a, a^2, ..., a^(d-1)}, where we write d as the size of this subgroup. Notice that we have a^d = 1.
You first need to prove that it can't get stuck somewhere for eternity. Without additional knowledge, all we know is that all the powers are invertible elements of Z_n, in other words values from 1 to p-1, but that doesn't prove that there is a nonzero d for which a^d = 1.
Edit: Ah, I missed it: you say it generates a subgroup. Hmm, is that trivial?
The group is finite, so obviously it cannot get stuck forever. Eventually you reach a^m = a^n, and since a is invertible you can cancel out and get a^(m-n) = 1.
It's quite general that the set of all powers of an element forms a group, which is abelian and cyclic. If you have doubts, you can check using the method I showed earlier.
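Concretely (a sketch I wrote; n = 20 is arbitrary), here is the pigeonhole argument in code: for any invertible a mod n, the list of powers closes into a cycle whose first repeat is a itself, so the list necessarily passes through 1.

from math import gcd

n = 20
for a in range(1, n):
    if gcd(a, n) != 1:
        continue                           # only invertible elements are considered
    powers, x = [], a
    while x not in powers:
        powers.append(x)
        x = x * a % n
    assert x == a                          # the first repeat is a itself (no "tail")
    assert powers[-1] == 1                 # so the last new power before the repeat is 1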
r57shell wrote:
Next you use Lagrange's theorem, which is a whole lecture about orders.
I wholly disagree with this. The proof of Lagrange's theorem boils down to the simple fact that, given a subgroup, you can define cosets, and they partition the group into classes, all of the same size.
Anyway, it's my feeling that you don't appreciate the simplicity of the proof because you're not very familiar with abstract algebra. This takes a long time to get used to, but when I read somewhere that the size of a group is n, for example, it immediately comes to me that raising any element to the n-th power (or to any multiple of its order, which divides n) gives the identity. That's why I didn't even bother to provide a proof of this when I mentioned it.
Notice that the invertible elements in Z_n, for any n, form a group, because for any two invertible a and b in Z_n, ab^(-1) is also invertible in Z_n.
I think you mean (ab)^(-1).
No, I don't. You do not seem very familiar with algebra. Proving that a, b in a set implies that ab^(-1) is in the set is a shortcut to prove that something is a subgroup.
Yes, I'm not familiar with shortcuts in proofs. But I'm familiar with many facts, like a^(p-1) = 1 (mod p) and a^phi(n) = 1 (mod n), and that the order divides phi(n), and so on. Also, that Z_p is cyclic and a generator exists. I don't dig into proofs; I've heard them, but I don't remember them. It's kind of a disclaimer.
Now, regarding the shortcut and proficiency: you said a^(p-1) = 1 is a trivial fact just from the size. Now I wonder, did you mean it's trivial for someone who is familiar with group theory and knows all that stuff about phi(n), Lagrange's theorem, Bezout's identity, orders of elements, orders of groups, subgroups, invertible elements, and so on? If yes, then I agree, it's trivial. And then the proof of Fermat's theorem for n = 3 is also trivial.
When I said "I don't know a trivial proof" I meant a proof with far fewer facts used, and much "simpler" facts. For example, you could prove Fermat's little theorem using Euler's theorem a^phi(n) = 1 (mod n), but that's an ugly way.
Now, regarding the shortcut. This fact is about any subgroup of any group. Now I want to prove it. First, I want to show that if H is a subgroup then ab^(-1) is in H. Because b is in H, b^(-1) is in H, so ab^(-1) is also in H. This part is obvious.
Next, I need to prove the opposite: if ab^(-1) is in H, then e, ab, a^(-1), b^(-1) are in H. Alright, if ab^(-1) is in H, then putting b = a we get that aa^(-1) is in H (this wasn't obvious until I tried a = b). It looks obvious, but it's suspicious. It raises a question: does stating that ab^(-1) is in H assume that b^(-1) exists? Otherwise, how can we tell that ab^(-1) is in H if we don't know that b^(-1) exists? So, from now on, I'll assume we know that b^(-1) exists. But it holds for any a, b, so it also works for b = a, which means a^(-1) also exists. Then aa^(-1) = e. But look! It's useless! The existence of a^(-1) already implies that e also exists. And we don't need to put b = a, because b^(-1) already exists, since a can also be b.
If this is exactly how the shortcut works (assuming that a^(-1) is in H is already given by the fact that ab^(-1) is in H), it's strange: not obvious nor trivial. And actually it's then identical to saying that the identity and a^(-1) are in H, and only after that may you claim ab^(-1) is in H; otherwise the b^(-1) in that expression is meaningless.
If this is not how it works, then please explain; I don't get it.
Edit: Oh, it looks like I understand now. The thing is: the operation on H and on G is the same. And we know that e from G is also e for H because the operation is the same. And we know that if b is in H then b is also in G, so we can take b^(-1) in G and check whether it is in H. Then yes, we can put a = b, a = e and so on, and we get the shortcut working.
p4wn3r wrote:
r57shell wrote:
p4wn3r wrote:
Now, take an element a and look at the subgroup it generates: {1, a, a^2, ..., a^(d-1)}, where we write d as the size of this subgroup. Notice that we have a^d = 1.
You first need to prove that it can't get stuck somewhere for eternity. Without additional knowledge, all we know is that all the powers are invertible elements of Z_n, in other words values from 1 to p-1, but that doesn't prove that there is a nonzero d for which a^d = 1.
Edit: Ah, I missed it: you say it generates a subgroup. Hmm, is that trivial?
The group is finite, so obviously it cannot get stuck forever.
No; why can't 1, a^1, a^2, a^3, ... correspond to 1, 2, 3, 3, 3, 3, 3, ...? Each element is from Z_n, but it doesn't go back to 1.
p4wn3r wrote:
It's quite general that the set of all powers of an element forms a group, which is abelian and cyclic. If you have doubts, you can check using the method I showed earlier.
I don't have doubts. I know that for any a I'll get back to 1 because it is cyclic. I just don't remember the proof. What you said doesn't look like a proof; "it is finite" is not an argument unless you use something in addition.
p4wn3r wrote:
r57shell wrote:
Next you use Lagrange's theorem, which is a whole lecture about orders.
I wholly disagree with this. The proof of Lagrange's theorem boils down to the simple fact that, given a subgroup, you can define cosets, and they partition the group into classes, all of the same size.
So, now we also need knowledge about cosets and their sizes.
p4wn3r wrote:
Anyway, it's my feeling that you don't appreciate the simplicity of the proof because you're not very familiar with abstract algebra.
Probably. Also, I don't know the relationships between the facts, i.e. which fact comes first. You can't use a fact that you'll prove later to prove an earlier fact, otherwise you'll get a logic loop.
p4wn3r wrote:
This takes a long time to get used to, but when I read somewhere that the size of a group is n, for example, it immediately comes to me that raising any element to the n-th power (or to any multiple of its order, which divides n) gives the identity. That's why I didn't even bother to provide a proof of this when I mentioned it.
Doesn't the fact that powers go back to 1 work only for cyclic groups?
Now, take an element a and look at the subgroup it generates: {1, a, a^2, ..., a^(d-1)}, where we write d as the size of this subgroup. Notice that we have a^d = 1.
You first need to prove that it can't get stuck somewhere for eternity. Without additional knowledge, all we know is that all the powers are invertible elements of Z_n, in other words values from 1 to p-1, but that doesn't prove that there is a nonzero d for which a^d = 1.
Edit: Ah, I missed it: you say it generates a subgroup. Hmm, is that trivial?
The group is finite, so obviously it cannot get stuck forever.
No; why can't 1, a^1, a^2, a^3, ... correspond to 1, 2, 3, 3, 3, 3, 3, ...? Each element is from Z_n, but it doesn't go back to 1.
p4wn3r wrote:
It's quite general that the set of all powers of an element forms a group, which is abelian and cyclic. If you have doubts, you can check using the method I showed earlier.
I don't have doubts. I know that for any a I'll get back to 1 because it is cyclic. I just don't remember the proof. What you said doesn't look like a proof; "it is finite" is not an argument unless you use something in addition.
Well, the "something in addition" is in fact right next to the part you quoted, which you left out for some reason.
p4wn3r wrote:
Eventually you reach a^m = a^n, and since a is invertible you can cancel out and get a^(m-n) = 1.
Do you have any doubts about this? It's implicitly assumed that n < m, and the cancellation is justified because you can multiply both sides by a^(-n).
r57shell wrote:
Doesn't the fact that powers go back to 1 work only for cyclic groups?
No, it works for all finite ones.
r57shell wrote:
When I said "I don't know a trivial proof" I meant a proof with far fewer facts used, and much "simpler" facts. For example, you could prove Fermat's little theorem using Euler's theorem a^phi(n) = 1 (mod n), but that's an ugly way.
I don't think so. If you want to prove Euler's theorem using elementary arguments, you first need to prove Fermat's little theorem, then work out the prime power case and use Chinese Remainder Theorem to work out the general case. Unless you can come up with a proof of Euler's theorem that does not invoke Fermat, it's circular to claim that Euler proves Fermat.
Coincidentally, if you use groups, no distinction is needed. That's because phi(n) is, by definition, the size of the multiplicative group mod n. And the group theoretical proof works just fine, just replace p-1 with phi(n), which is the size of the group.
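In code (my own sketch, over a small range of n), that unified statement reads: take phi(n) as the size of the unit group and apply the same power argument.

from math import gcd

for n in range(2, 60):
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    phi = len(units)                                 # phi(n) = size of the multiplicative group mod n
    assert all(pow(a, phi, n) == 1 for a in units)   # Euler's theorem; Fermat is the case n prime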
r57shell wrote:
You said a^(p-1) = 1 is a trivial fact just from the size. Now I wonder, did you mean it's trivial for someone who is familiar with group theory and knows all that stuff about phi(n), Lagrange's theorem, Bezout's identity, orders of elements, orders of groups, subgroups, invertible elements, and so on?
All right, several things:
(*) You don't need to know anything about phi(n) to show this particular statement, I don't know why you're bringing it up here.
(*) Why is knowing Bezout's identity a problem? It's a result used even in elementary number theory, abstract algebra is not the thing that adds it into the problem.
(*) Regarding Lagrange's theorem and orders of elements, it's in the first chapter of any book. I could hardly think of anyone who knows group theory and doesn't know how to apply Lagrange's theorem. If we go to Wikipedia's article about this proof, it says just that: "This proof requires the most basic elements of group theory."
The thesis I am trying to present here (unsuccessfully, it looks like) is that it's more productive to study, and even define, number theory in an algebraic manner right away. It takes some time to familiarize with algebra, yes, but it's much better to study it than spend lots of time on elementary proofs that don't have a suitable generalization, and are much longer.
Eventually you reach a^m = a^n, and since a is invertible you can cancel out and get a^(m-n) = 1.
Do you have any doubts about this? It's implicitly assumed that n < m, and the cancellation is justified because you can multiply both sides by a^(-n).
My bad :( Yes, it's enough to show that it has to come back to 1.
p4wn3r wrote:
r57shell wrote:
Doesn't the fact that powers go back to 1 work only for cyclic groups?
No, it works for all finite ones.
OK, I'm too lazy to check.
p4wn3r wrote:
(*) You don't need to know anything about phi(n) to show this particular statement, I don't know why you're bringing it up here.
My point was that sometimes you may bring in some general facts and prove things with them, claiming it's easier, and end up with a circular proof. That's not the case here, but the point was that the general case can be a consequence instead of a cause.
p4wn3r wrote:
(*) Why is knowing Bezout's identity a problem? It's a result used even in elementary number theory, abstract algebra is not the thing that adds it into the problem.
It's not a problem, just an additional "dependency" :D So, you need to know it and know its proof.
p4wn3r wrote:
(*) Regarding Lagrange's theorem and orders of elements, it's in the first chapter of any book.
Then we need to study the first chapter of group theory.
p4wn3r wrote:
The thesis I am trying to present here (unsuccessfully, it looks like) is that it's more productive to study, and even define, number theory in an algebraic manner right away. It takes some time to familiarize with algebra, yes, but it's much better to study it than spend lots of time on elementary proofs that don't have a suitable generalization, and are much longer.
I would say something even more general: the more you know, the better tools you have. Usually any proof can be used somewhere else, or at least its tricks can.
Back to Fermat's Little Theorem. I'll gather everything together:
1) What a group is (closure, identity, invertibility; what a multiplicative inverse is).
This is the first thing you use when you claim that all the invertible elements make a group.
Oh, something is missing here again. You said it's a shortcut to show that it's a subgroup. Then we need to have a group first (over multiplication!, because subgroups only work over the same operation). Anyway, keep going.
2) What a subgroup is.
3) Bezout's identity.
Now the size of the group is proven.
4) Prove that any element a generates a subgroup, using the argument with a^n = a^m, the finite size, and the invertibility of every element. Btw, we skipped the part of the proof that it's indeed a subgroup, not just an array of powers.
5) Lagrange's theorem (cosets, etc.) (first chapter of group theory).
A good list to study. Let's say "trivial, just from the size of the group being p-1" :)
Prove that no partial sum of the harmonic series is an integer (other than, rather obviously, the very first term, which is just 1).
(And by "partial sum" I mean 1+1/2+1/3+...+1/n for any given n.)
Let a/b denote a rational number. Define the p-adic valuation ord_p(a/b) of a rational number for a prime p as follows:
If e is the largest number such that p^e divides a, and f is the largest number such that p^f divides b, then ord_p(a/b) = e - f.
It should be clear that the definition above is invariant with respect to all equivalent fractions for a given rational number. Also, ord_p satisfies the following property:
ord_p(xy) = ord_p(x) + ord_p(y).
Perhaps more importantly, it satisfies the following (it can be proven by writing the fraction sum and taking the lcm):
ord_p(x+y) >= min(ord_p(x), ord_p(y)) (*)
In particular, if ord_p(x) and ord_p(y) are different, then the previous inequality becomes an equality. This is also simple, if boring, to prove.
Now, denote by H_n the sum of all numbers of the form 1/k, for k from 1 to n, and let us look at ord_2(H_n).
Notice that we must have n between two consecutive powers of two, that is, 2^m <= n < 2^(m+1).
Now, I claim that 1/2^m is the term in H_n with the smallest value of ord_2 among all terms in the sum, and that it is the only term attaining that minimum. Indeed, we have ord_2(1/2^m) = -m, and any other term with ord_2(1/k) <= -m would require k = 2^m * c with c >= 2, which is impossible, since any k of this form would exceed n.
Because of this, we can apply (*) to the n terms of the sum, paying attention to the fact that there is only one minimum term, with valuation -m. The sum of that term with any of the others therefore has valuation exactly -m, and it follows that ord_2(H_n) = -m. In particular, H_n is not an integer, because no integer can have a negative p-adic valuation.
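For anyone who wants to see it numerically, here is a short Python sketch (mine, with an arbitrary range of n) computing ord_2(H_n) directly and checking it equals -m:

from fractions import Fraction

def ord2(q):
    # 2-adic valuation of a nonzero rational: ord_2(numerator) - ord_2(denominator)
    num, den, v = q.numerator, q.denominator, 0
    while num % 2 == 0:
        num //= 2
        v += 1
    while den % 2 == 0:
        den //= 2
        v -= 1
    return v

for n in range(2, 200):
    H = sum(Fraction(1, k) for k in range(1, n + 1))
    m = n.bit_length() - 1                 # 2^m <= n < 2^(m+1)
    assert ord2(H) == -m                   # negative valuation, so H_n is not an integer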
Prove that no partial sum of the harmonic series is an integer (other than, rather obviously, the very first term, which is just 1).
(And by "partial sum" I mean 1+1/2+1/3+...+1/n for any given n.)
As I thought, this is a blackpenredpen problem. Anyway, I solved it without looking at how it was done in that video:
Let 2^k be the largest power of 2 which is less than or equal to n (where n>1). Then:
* The largest power of 2 that divides LCM(1,2,...,n) is 2^k (k>=1)
* So the largest power of 2 that divides LCM(1,2,...,n)/2 is 2^(k-1)
* 2^k is the only number from 1 to n that has k (or more) factors of 2
* Therefore 2^k is the only number from 1 to n that does not divide LCM(1,2,...,n)/2
Suppose 1+1/2+1/3+...+1/n is an integer. Multiply it by LCM(1,2,...,n)/2. This causes all summation terms to be integers except the term corresponding to 1/2^k, and so the resulting sum is not an integer. This contradicts that 1+1/2+1/3+...+1/n is an integer, because an integer times an integer is supposed to be an integer.
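A quick numeric check of this argument (my own sketch; note that math.lcm needs Python 3.9+): after multiplying by LCM(1,...,n)/2, the only index that fails to give an integer term is 2^k.

from math import lcm

for n in range(2, 50):
    L = lcm(*range(1, n + 1)) // 2
    k = n.bit_length() - 1                       # 2^k is the largest power of 2 that is <= n
    non_integer = [j for j in range(1, n + 1) if L % j != 0]
    assert non_integer == [2**k]                 # exactly one term stays non-integer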
PS: This is "essentially" how p4wn3r solves it, just in elementary terms.
In a previous post, I mentioned the following result:
If a and b are two positive integers, with gcd(a,b)=1, then the set
S = {ma + nb | m, n in Z} = Z
However, if we constrain m and n to be non-negative integers, it is in general not true that the set will be {0, 1, 2, ...}. For this case, however, we can still say something. Namely, if we define f(a,b) as the largest integer not expressible in the form ma + nb (or, as a mathematician would say, as a linear combination with non-negative integer coefficients), we have:
For any integers a, b greater than 1 such that gcd(a,b)=1, f(a,b) = ab-a-b
Prove this.
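For anyone who wants to experiment before proving it, here is a brute-force Python sketch (mine; the coprime pairs are arbitrary examples) that checks the claimed value of f(a,b):

from math import gcd

def representable(x, a, b):
    # is x = m*a + n*b for some integers m, n >= 0 ?
    return any((x - m * a) % b == 0 for m in range(x // a + 1))

for a, b in [(3, 5), (4, 7), (5, 9), (11, 13)]:
    assert gcd(a, b) == 1
    frob = a * b - a - b
    assert not representable(frob, a, b)        # f(a,b) itself is not expressible
    assert all(representable(x, a, b) for x in range(frob + 1, frob + 2 * a * b))  # everything above it is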