Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
arflech wrote:
Also, although I can satisfy that functional equation with x*f(x) equal to a linear function, so that f(x) is of the form P+Q/x, this just brings us back, in the best case, to some weighted arithmetic mean of A and AX/x, which I have already shown to have a problem of lack of well-definition.

I'm not sure you quite understand well-definition here: the idea is that there's a set of values, and the easiest way to describe how to get them is to state the value at one point and use some algorithm to get the rest, and this algorithm is well-defined if and only if you get the same set of values regardless of the initial point. If you say that the unit price is A when the size is X, and your algorithm then says that the unit price is B when the size is Y and C when the size is Z (any distinct fixed values of Y and Z within range will do), and I then restart the algorithm from "unit price is B when size is Y" and don't end up getting A at X and C at Z, then the algorithm is not well-defined, because the set of values you generate depends on the point at which you started generating them.
I do understand what you mean by well-definition. I'm just arguing that the problem of getting different curves for different data is not caused by the means themselves, but by evaluating them between different extremes. You're correct that simply using the extremes at a point (X,A) will yield different results, but to get a well-defined algorithm you don't need to consider them explicitly; you only need the extremes at the point (1, A').

Consider a different problem: the algorithm receives only a real number A' and assumes f(1)=A'. Here it's easy to be well-defined, you just need to make sure f(1)=A'. For example, you can use the extremes f(x)=A' and f(x)=A'/x.

Now generalize to your problem, which receives two real numbers X and A and assumes f(X)=A. This reduces to the previous one: pretend that f(1)=A' is given, use the same extremes f(x)=A' and f(x)=A'/x, and then find A' by solving f(X)=A. You don't even need to check that other points on the curve generate the same curve, because of the way it was constructed.

So, this leads us to the conclusion: given f(X)=A, using a mean of the extremes f(x)=A and f(x)=AX/x leads to a possibly not well-defined curve. However, using a mean of f(x)=A' and f(x)=A'/x and adjusting A' so that f(X)=A always gives a well-defined answer. Is there any particular reason you need to consider the extremes f(x)=A and f(x)=AX/x explicitly to make a well-defined algorithm? It only seems to make things harder.
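A quick numerical check of this construction (a throwaway sketch; the function names and the choice of the arithmetic mean as "the mean" are mine): build f from the extremes A' and A'/x, solve f(X)=A for A', and verify that re-seeding the algorithm from any other point of the resulting curve reproduces the same curve.

def curve_through(X, A):
    # mean of the extremes g(x) = A' and g(x) = A'/x, with A' chosen so that f(X) = A
    A_prime = 2 * A / (1 + 1 / X)
    return lambda x: (A_prime / 2) * (1 + 1 / x)

f = curve_through(3.0, 10.0)      # unit price 10 at package size 3
g = curve_through(5.0, f(5.0))    # re-seed from another point on the same curve
print(all(abs(f(x) - g(x)) < 1e-12 for x in (0.5, 1, 2, 3, 4, 5, 8)))  # True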
arflech
He/Him
Joined: 5/3/2008
Posts: 1120
Explicitly considering the extremes assures that the algorithm will always be between them. Anyway, I'll try out your suggestion...

f(x)=(B/2)(1+1/x), where f(X)=A means A=(B/2)(1+1/X), so B=2A/(1+1/X). Then the derived algorithm is f(x)=A*(1+1/x)/(1+1/X), which is indeed well-defined. However, is it strictly between A and AX/x?

It is evident that if x>0 then f(x)>A, which is a problem because if x>X then A>AX/x, so f(x) is not strictly between A and AX/x. Now considering the ratio with AX/x: f(x)/(AX/x)=(1+x)/(1+X), so f(x)>AX/x when x>X (as found earlier), and AX/x>f(x) when X>x. So this algorithm only works if X>x; that is, if the domain is restricted so that X is the maximum possible package size.
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
arflech wrote:
Then the derived algorithm is f(x)=A*(1+1/x)/(1+1/X), which is indeed well-defined. However, is it strictly between A and AX/x? It is evident that if x>0 then f(x)>A
Huh? The function in question was constructed having f(X)=A, so having f(x)>A for all x>0 makes no sense. For x>X:

x > X
1/x < 1/X
1 + 1/x < 1 + 1/X
(1 + 1/x)/(1 + 1/X) < 1
A*(1 + 1/x)/(1 + 1/X) < A
f(x) < A
arflech
He/Him
Joined: 5/3/2008
Posts: 1120
Oops, I miscalculated.
Skilled player (1651)
Joined: 7/25/2007
Posts: 299
Location: UK
Here's a weird question which came to me while playing Megaman 2: what's the least number of level completions necessary in order to experience all 8 as a final boss?

To see what I mean, recall how the basic game works. You can choose to do the first 8 levels in any order, so whichever level you do last, you will have the most upgrades for it. So if I want to try each of the levels as the final one, what would be the most efficient way of doing this?

For example, let's say I only wanted to see what Flashman's or Bubbleman's stage would be like if completed last. You could just do 2 independent playthroughs, requiring a total of 16 levels to be completed, but that's highly inefficient. A better way would be to complete the 6 other levels first, make a save state, and then complete the two remaining levels in two different orders, each of which gives the desired result of either Bubbleman or Flashman being last. Doing it this way requires only 6+2+2=10 level completions, a clear improvement.

But what about all 8? Doing 8 independent runs would require 64 levels, so by breaking it up into subsections, what's the minimum number of level completions required to achieve the goal?
Tub
Joined: 6/25/2005
Posts: 1377
Using a binary partition yields 32 levels, half the original number.
12345678
      87
    7856
      65
56781234
      43
    3412
      21
I doubt there's a better way, but I lack proof at the moment.
m00
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
My attempt:

For 1 level, the minimum is clearly a_1 = 1.

For n>1 levels, we necessarily have to load a state from before the first level: if you only savestate after the first level, you'll never finish the game at it. So, for n>1, we have to use a savestate at the first level.

We now try to reduce this problem to a smaller one. Suppose you partition the set of levels into two subsets A and B, and decide that you'll finish the game at the levels in A before the levels in B. There's a simple way to do that whose optimality is probably easy to prove: traverse all the levels in B in any order, savestate, and do the optimal process for the n_A levels in A. After that, load the state from before the first level, traverse A in any order, and do the optimal process for the n_B levels in B.

The count should only depend on the sizes of A and B, and since traversing B, loading a state back to the beginning, and traversing A takes n level completions in total, the answer should be

a_n = n + min(a_i + a_(n-i)), for i = 1, ..., n-1

So I can compute the first values, put them in OEIS, hope that someone has studied this already, and find https://oeis.org/A033156 , which luckily has a closed formula.
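The recurrence is easy to check by brute force (a minimal sketch; the memoization is just to keep it fast):

from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    # a(1) = 1; a(n) = n + min over all splits of a(i) + a(n-i)
    if n == 1:
        return 1
    return n + min(a(i) + a(n - i) for i in range(1, n))

print([a(n) for n in range(1, 9)])  # [1, 4, 8, 12, 17, 22, 27, 32]

a(8) = 32 agrees with Tub's binary-partition plan, and the first values are exactly the ones that lead to A033156.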
arflech
He/Him
Joined: 5/3/2008
Posts: 1120
It's well-known that among the unit circles with respect to the p-norms, the circles with respect to the 0- and ∞-norms have the greatest Euclidean circumference (8), while the circle with respect to the 1-norm has the least Euclidean circumference (4√2): https://secure.wikimedia.org/wikipedia/en/wiki/Unit_sphere#Unit_balls_in_normed_vector_spaces

What about the circumferences with respect to the respective p-norms themselves? It is trivial to see that the unit 0-circle has 0-circumference 8, the unit ∞-circle has ∞-circumference 8, and the unit 2-circle has 2-circumference 2π, and there is one easy non-trivial example using the unit 1-circle (which, remember, has Euclidean circumference, or 2-circumference, 4√2):

The differential element of arclength under the p-norm is ds = (|dx|^p + |dy|^p)^(1/p), and the unit p-circle is the graph of |x|^p + |y|^p = 1; when p=1 these simplify to ds = |dx| + |dy| and |x| + |y| = 1. To make it even easier, a symmetry argument can be used to consider only the portion of the unit p-circle in the first quadrant (then multiply the p-length by 4), where |x| = x, |y| = y, |dx| = dx, |dy| = -dy, and y runs from 1 to 0 as x runs from 0 to 1. With this further simplification, we have ds = dx - dy and x + y = 1, so y = 1-x, so dy = -dx, so ds = 2dx, so the 1-length of the quadrant of the unit 1-circle is 2, and the 1-circumference of the unit 1-circle is 8.

For the general case, our simplification yields ds = ((dx)^p + (-dy)^p)^(1/p) and x^p + y^p = 1, so y = (1-x^p)^(1/p), so dy = -(1-x^p)^(1/p-1) * x^(p-1) * dx, so (-dy)^p = (1-x^p)^(1-p) * x^(p²-p) * (dx)^p, so ds = (1 + (1-x^p)^(1-p) * x^(p²-p))^(1/p) * dx.

At this point, Wolfram|Alpha failed me, but basically I tried to integrate this in x and numerically plot its value in p; another idea is to fire up Mathematica and see whether there is any closed form for int(ds, x, 0, 1) and then whether there is a closed-form solution of d(int(ds, x, 0, 1))/dp = 0 with respect to p; I tried differentiating in p first and then integrating in x, but that's even more difficult.

It might also be interesting to see just how robust the limiting relationships between the p-norms, the 0-norm, and the ∞-norm are, like whether the limits of the p-circumference of the unit p-circle as p approaches 0 or ∞ are 8, just as the limit of the p-norm as p approaches 0 is the 0-norm (min norm) and as p approaches ∞ is the ∞-norm (max norm).

FWIW I also tried doing this in Wolfram|Alpha for the unit 3-circle and got a 3-circumference of about 6.5, which is between 2π and 8.
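For anyone who wants to reproduce these numbers without Wolfram|Alpha, the quadrant integral above is easy to evaluate numerically (a minimal sketch assuming SciPy is available; the endpoint singularity at x=1 is integrable, since the exponent there works out to (1-p)/p > -1, so quad copes with it):

from scipy.integrate import quad

def p_circumference(p):
    # 4 * integral from 0 to 1 of ds = (1 + (1-x^p)^(1-p) * x^(p^2-p))^(1/p) dx
    integrand = lambda x: (1 + (1 - x**p)**(1 - p) * x**(p*p - p))**(1 / p)
    value, _ = quad(integrand, 0, 1)
    return 4 * value

for p in (1, 2, 3, 4):
    print(p, p_circumference(p))
# 1 -> 8.0, 2 -> 6.2832 (2*pi), 3 -> ~6.5195, 4 -> ~6.7939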
Player (80)
Joined: 8/5/2007
Posts: 865
I have a quick question (not a challenge, really) about something that's bugged me for a long time: what, if anything, is the population standard deviation used for? The sample standard deviation is one of the most valuable statistical tools, especially when used in conjunction with the central limit theorem (I would argue that all of science ultimately rests on the sample standard deviation). If you have the population standard deviation, however, then you must know the entire population, in which case you could use the population directly instead of playing statistical games with it. Furthermore, if the population isn't normally distributed, the population standard deviation does not offer any grand insights and can even lead to spurious results. Is there anything the population standard deviation can be used for, except to very roughly characterize the distribution itself?
arflech wrote:
It's well-known that among the unit circles with respect to the p-norms, the circles with respect to the 0- and ∞-norms have the greatest Euclidean circumference (8), while the circle with respect to the 1-norm has the least Euclidean circumference (4√2) [...] What about the circumferences with respect to the respective p-norms themselves? [...] FWIW I also tried doing this in Wolfram|Alpha for the unit 3-circle and got a 3-circumference of about 6.5, which is between 2π and 8.
I don't want to derail the topic. I think your question is very interesting and I'll give it a quick shot.

Edit: I am not sure about your conclusion that the 0-circumference of the 0-ball is 8. The formula for the arclength element breaks down, and even treating it as a limiting case of p->0+ quickly yields C_0 = ∞.

Edit 2: A random thought crossed my mind. I'm not intimately familiar with p-norms, but I do recall from my Real Analysis class that norms are crucial for defining "closeness" and therefore the derivative in arbitrary vector spaces. As such, is the form of the derivative somehow changed here? I suspect not, because your problem lies specifically in one dimension.
arflech
He/Him
Joined: 5/3/2008
Posts: 1120
p is in the real numbers under the usual norm (and at any rate, all of the p-norms are equivalent in one-dimensional space); also, all of the p-norms have the property that if a sequence converges under one norm, it converges under all of the others, to the same limit.

Also, I somewhat naïvely assumed that just as the limit of the p-norm as p->∞ is the sup norm, the limit of the p-norm as p->0 would be the inf norm (of course for p<1 it's not a real "norm" because it fails the triangle inequality, but whatever); rather, it's equal to ∞ outside the coordinate axes and to the absolute value of the nonzero coordinate everywhere else; therefore the arclength element is |dx| if dy=0, |dy| if dx=0, and ∞ elsewhere. Also, the 0-circle turns out to consist of the four points (±1,0) and (0,±1), so its p-arclength is 0 for all p; however, in the sense of "how long it would take to traverse at speed 1 while never leaving it," its length is ∞.

I went ahead and tested this out in Wolfram|Alpha for ever-smaller values of p...

0.5: 14.2832
0.1: 67.7318
0.01: 671.135
0.005: 1341.62
0.002: 3353.07

Below that it was too hard for Wolfram|Alpha to deal with, but basically it shoots to ∞ (and beyond!)

I also tested some high values of p...

3: 6.519535986117990194
4: 6.793869647256916842428
5: 6.9969136160908981518
6: 7.145366367675450102
7: 7.2569521447953687
8: 7.343369831286500739918
9: 7.41207513289697977651
10: 7.467921618445657
11: 7.5141687345623637

After that it got hard for Wolfram|Alpha to perform a proper numerical integration, and for some very high values (like 99 and 100) it returned just 4; the plot of the relevant integrand for p=11 already shows that it's near 4 for nearly the whole domain, so it could be a sampling error, or it could be that the p-circumference does in fact swing down again to approach 4 in the limit...

Anyway, noticing that 2 appears to be the minimum, with minimum value 2π, I tested some values of p near 2...

1.9: 6.28928
1.99: 6.28324
1.999: 6.28319
2: 6.283185307179586476925286766559005768394338798750211641949889184615632812572417997256069650684234136
2.001: 6.28319
2.01: 6.28324
2.1: 6.28818

I have a sneaking suspicion that the minimum value of the p-circumference of the p-circle is indeed found at p=2; in this way our ever-so-homogeneous "natural" Euclidean norm can be said to be optimal.

I also suspect that the p-circumference of the p-circle (under an actual p-norm, which means p is 1 or greater) is the same as the q-circumference of the q-circle when q = 1 + 1/(p-1), that is, when 1/p + 1/q = 1 (this interesting relation is commonly seen in statements about Lp spaces). In particular...

3/2: 6.51954
4/3: 6.793869647256916842427963755382900314043336012387254400780307334760265054775164310119185196453647871061330302026735936783391190405919742048207766394085
5/4: 6.996691
6/5: 7.14537
7/6: 7.25695
8/7: 7.34337
9/8: 7.41208
10/9: 7.46792
11/10: 7.51417

I suspect some integral substitution would be necessary to prove this equivalence, but I have a hard time seeing it... but for those who would like to try, substituting p with p/(p-1) in that earlier formula for ds yields (1 + (1-x^(p/(p-1)))^(1/(p-1)) * x^(p/(p-1)²))^(1-1/p) * dx.

Maybe the substitution u = 1-x^(p/(p-1)) would work, with x = (1-u)^((p-1)/p), so that dx = -(p-1)/p * (1-u)^(-1/p) * du, and with the interval of integration flipping from (0,1) to (1,0); this transformation (after flipping the interval of integration back to (0,1)) yields ds = (1 + (u-u²)^(1/(p-1)))^(1-1/p) * (p-1)/p * (1-u)^(-1/p) * du, which looks interesting in its own right but isn't of the same form as the original...

Another interesting idea would be to try to find the p-area of the p-circle; this seems a bit harder because only the 2-norm is even definable by an inner product: http://livetoad.org/Courses/Documents/292d/Notes/norms_and_inner_products.pdf

Reading that PDF also led me to wonder whether, if 1/p + 1/q = 1 and p is at least 1, the p-operator norm of a matrix A is equal to the q-operator norm of A* (the transjugate of A)...
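The conjugate-exponent conjecture can at least be probed numerically with the same integral (reusing the p_circumference sketch from above; agreement within quadrature error is suggestive, not a proof):

for p in (3.0, 4.0, 5.0, 11.0):
    q = p / (p - 1)           # the Hölder conjugate: 1/p + 1/q = 1
    print(p, p_circumference(p), q, p_circumference(q))
# e.g. p=3 and q=3/2 both give ~6.519536, matching the table above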
arflech
He/Him
Joined: 5/3/2008
Posts: 1120
If the top 1% of 1% of Americans make 6% of the income (in a recent year, they did), and income inequality is uniform in some sense, what portion of the income was made by the top 1% of Americans (and what sense did you use)? How does that figure compare with the actual data? How can you use this thought experiment to get an ideal plot of income (or wealth, or anything else) in a population from knowing only its Gini coefficient?
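One concrete reading (my assumption; arflech may have a different "sense" in mind) is that incomes follow a Pareto distribution, whose income shares are self-similar: the top fraction q of earners receives the share q^(1-1/α). Fitting α to the given data point and reading off the top 1% share looks like this:

from math import log

top_frac, top_share = 1e-4, 0.06            # top 1% of 1% earn 6% (given)
exponent = log(top_share) / log(top_frac)   # this is 1 - 1/alpha
alpha = 1 / (1 - exponent)                  # fitted Pareto exponent
print(alpha)                                # ~1.44
print(0.01 ** exponent)                     # top 1% share: ~0.245, i.e. ~24.5%

Under that model the top 1% would earn roughly a quarter of all income; how that compares with the actual data is part of the exercise.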
Editor, Skilled player (1536)
Joined: 7/9/2010
Posts: 1319
What does the graph of the function f(x)=x^∞ look like? On the interval -1<x<1 it is 0, and for x>1 it is ∞. But I figured out that at x=-1 it is either 1 or -1, depending on how many times I multiply. Likewise, for x<-1 it becomes ∞ or -∞. This means that for x≤-1 the graph is mirrored across the x-axis.
Favorite animal: STOCK Gt(ROSA)26Sortm1.1(rtTA,EGFP)Nagy Grm7Tg(SMN2)89Ahmb Smn1tm1Msd Tg(SMN2*delta7)4299Ahmb Tg(tetO-SMN2,-luc)#aAhmb/J YouTube Twitch
Player (246)
Joined: 8/6/2006
Posts: 784
Location: Connecticut, USA
I feel like it would just be 0 for -1 < x < 1. Everywhere else it would not be plottable, since infinity is not in the real numbers. EDIT: Also, IIRC 1^infinity is considered indeterminate, so you can't plot that either.
Editor, Skilled player (1536)
Joined: 7/9/2010
Posts: 1319
Yeah of course. I didn't read the rubbish I wrote. :D
ElectroSpecter wrote:
EDIT: Also, iirc 1^infinity is considered indeterminate so you can't plot that either.
Edit: Huh, how's that possible? 1^∞ is the same as 1*1*1*1*1*1*1..., and this product will never go higher than one.
Favorite animal: STOCK Gt(ROSA)26Sortm1.1(rtTA,EGFP)Nagy Grm7Tg(SMN2)89Ahmb Smn1tm1Msd Tg(SMN2*delta7)4299Ahmb Tg(tetO-SMN2,-luc)#aAhmb/J YouTube Twitch
Editor, Expert player (2073)
Joined: 6/15/2005
Posts: 3282
I assume by 1^∞ you mean lim(n->∞) 1^n. Of course lim(n->∞) 1^n = lim(n->∞) 1 = 1. That being said, you should not use 1^∞ for this purpose. What some people mean by 1^∞ is lim(x->c) f(x)^g(x) for real functions f and g with lim(x->c) f(x) = 1 and lim(x->c) g(x) = ∞. This is an indeterminate form.

Anyway, f(x)=x^∞ really should be written f(x) = lim(n->∞) x^n, and I am assuming that "lim(n->∞)" denotes the limit of a sequence rather than of a real function, since that appears to be your intention. The function is as follows:

- f(1) = 1
- f(x) = ∞ for x > 1
- f(x) = 0 for -1 < x < 1
- f(x) is undefined otherwise

If "lim(n->∞)" denotes the real-function limit, then f(x) is the same as above except that f(x) is now undefined for all negative x.
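SymPy makes the indeterminacy concrete (a small sketch, assuming SymPy is installed): a base tending to 1 and an exponent tending to ∞ can combine to give different limits.

import sympy as sp

n = sp.symbols('n', positive=True)
print(sp.limit(sp.Integer(1)**n, n, sp.oo))   # 1: the base is exactly 1
print(sp.limit((1 + 1/n)**n, n, sp.oo))       # E: base -> 1, yet the limit is e
print(sp.limit((1 + 2/n)**n, n, sp.oo))       # exp(2): same 1^oo form, different value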
Joined: 10/20/2006
Posts: 1248
Show how and why 0.5^(x*y/ln(2)) approximates (1-x)^y for most values of 0 < x < 1, with y any natural number. It works especially well for large y; in that case it works even for 0 < x < 2.

Edit: It should actually be 1-0.5^(x*y/ln(2)) and 1-(1-x)^y, it seems. I'm not the best at maths and thought it'd make no difference.
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Odd, I get very different results from these expressions. Are they correct? I'll use n instead of y.

0.5 = e^ln(1/2) = e^(-ln 2), so the first expression is equivalent to e^(-x*n). I'm assuming that for f(x) = (1-x)^n and g(x) = e^(-x*n) to be close approximations you need both lim(n->∞) f(x)/g(x) = 1 and lim(n->∞) f(x)-g(x) = 0. While the latter is true for 0<x<2, the former isn't.

a_n = f(x)/g(x) = (e^x*(1-x))^n. This sequence is a geometric progression: it is constant if its ratio is 1, and converges to 0 if the ratio's absolute value is less than 1. Here the ratio e^x*(1-x) is 1 at x = 0 and smaller for x > 0, so for 0<x<1 the ratio of the two expressions tends to 0, and (1-x)^n is much less than e^(-x*n) for large n. For x very close to 0 the convergence is slower, but it still goes to 0.

I evaluated it for x=0.1 and n=10000, and it's consistent with the limits. As I increase n, it seems to get worse:
http://www.wolframalpha.com/input/?i=0.5^%280.1*10000%2FLog[2]%29

http://www.wolframalpha.com/input/?i=%281-0.1%29^10000
(The URL parser is messing up, so just copy-paste these)
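To see how far apart the two expressions actually are for x=0.1 and n=10000 (a small sketch working in log10, because (0.9)^10000 underflows an ordinary float):

from math import log, log10

x, n = 0.1, 10000
log_f = n * log10(1 - x)         # log10 of (1-x)^n
log_g = -x * n / log(10)         # log10 of e^(-x*n) = 0.5^(x*n/ln 2)
print(log_f, log_g)              # about -457.6 vs. -434.3
# both values are astronomically small, but their ratio is about 10^-23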
Player (246)
Joined: 8/6/2006
Posts: 784
Location: Connecticut, USA
I've been studying for an actuarial exam I'll be taking soon, concerning probability, and have learned quite a bit about it. One of the most interesting things has been joint probability between two random variables. If you're familiar with this subject, this one shouldn't be too difficult:

Let T1 be the time between a car accident and reporting a claim to the insurance company. Let T2 be the time between the report of the claim and payment of the claim. The joint density function of T1 and T2, f(t1,t2), is constant over the region 0 < t1 < 6, 0 < t2 < 6, t1 + t2 < 10, and zero otherwise. Determine E(T1 + T2), the expected time between a car accident and payment of the claim.
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
For that type of joint distribution, if t1 is on the x axis and t2 on the y axis, then E(T1) is the x coordinate of the centroid of the region where the density is positive, and E(T2) is the y coordinate. For this case, since the region is symmetric, E(T1) = E(T2). Expected value is linear, so E(T1 + T2) = E(T1) + E(T2) = 2*E(T1).

So the answer is twice the weighted mean of the centroids of a square and a triangle with "negative area". The centroid of a square is its center, and the x coordinate of a triangle's centroid is the mean of the x coordinates of its vertices. So:

E(T1 + T2) = 2*(3*36 - 2*(4+6+6)/3)/(36-2) = (324 - 32)/51 = 292/51 ≈ 5.7255

Of course, you can solve it in more traditional ways, with marginal distributions, or by computing the c.d.f. and differentiating it to get the p.d.f. Those require much more algebra, though.
Player (246)
Joined: 8/6/2006
Posts: 784
Location: Connecticut, USA
Cool! The standard way I've been doing these is with double integrals over the pdf. The joint density is a constant c, and c × (area of the region) = 1 (a rule of probability). The area is 34 (graphically, there's a triangle of area 2 taken out of the top corner of a 6×6 square), so c = 1/34, which is our pdf. Then we set up double integrals over both "shapes" (the 4×6 rectangle and the remaining quadrilateral), integrating (t1 + t2)·f(t1,t2) over each. Add them and we get ~5.73!
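Both approaches are easy to double-check with a quick Monte Carlo (a throwaway sketch using rejection sampling over the 6×6 square):

import random

random.seed(0)
total, count = 0.0, 0
while count < 1_000_000:
    t1, t2 = random.uniform(0, 6), random.uniform(0, 6)
    if t1 + t2 < 10:             # keep only points inside the region
        total += t1 + t2
        count += 1
print(total / count)             # ~5.7255, i.e. 292/51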
arflech
He/Him
Joined: 5/3/2008
Posts: 1120
This is a cute problem in game theory, from the popular-mathematics book "Why Do Buses Come in Threes?" The only real "challenge" is translating this word problem into a payoff matrix.
Patashu
He/Him
Joined: 10/2/2005
Posts: 4043
Seems pretty easy, unless I'm missing something?

Justin always wants to take the bus. If Tom can assume Justin is a perfect self-interested logician, Tom wants to use the phone.
My Chiptune music, made in Famitracker: http://soundcloud.com/patashu My twitch. I stream mostly shmups & rhythm games http://twitch.tv/patashu My youtube, again shmups and rhythm games and misc stuff: http://youtube.com/user/patashu
Joined: 5/30/2007
Posts: 324
Patashu wrote:
Seems pretty easy, unless I'm missing something? Justin always wants to take the bus. If Tom can assume Justin is a perfect self-interested logician, Tom wants to use the phone.
Yeah, in the pay-off matrix, Justin visiting strictly dominates him phoning, and then it's easy to see that Tom must choose using the phone. I was expecting it to be at least a mixed strategy, but it's simpler than that.
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Here's one I made today; I'd be interested to see if there are solutions very different from mine:

Give an example of a bounded non-constant analytic function f : R -> R whose Taylor series around x0=0 converges on the entire real line. Does your example work if you extend the domain and codomain to the complex numbers? If it doesn't, either find one that does or prove that no such function (analytic, bounded, non-constant, Taylor series converging in the entire complex plane) exists.

Hint: Get the Taylor series and make a meromorphic function whose Laurent series has the same coefficients, analyze the disc of convergence of this Laurent series and use contour integrals to find properties of the residues.

EDIT: Huh, I forgot there's already a theorem in complex analysis that solves this problem immediately, and I just found a different proof of it xD. Anyway, to be fair, just name the theorem (no need to prove it) and the challenge is solved.
Joined: 7/16/2006
Posts: 635
In the real numbers, the sine function's Taylor series converges for all x, and sine is bounded and nonconstant. In the complex numbers, no such function exists. All analytic complex functions either hit every complex number, miss exactly one value, or are constant. Can't remember the name of that theorem.