This isn't a pure physics question per se, but close enough.
I have been trying to find out how much fusion fuel there is in a typical H-bomb, but I cannot find this info. Does anyone have any idea?
Also: Is the fusion fuel pure hydrogen, an isotope, or something else?
The standard hydrogen fuel is a 50/50 mixture of deuterium and tritium, since D-T fusion has the highest cross section of any hydrogen fusion reaction.
Also, for the purposes of bombs, D-T fusion has the important property of producing copious fast neutrons. These neutrons cause fissions in the material surrounding the hydrogen. It's actually these fissions which create most of the energy of a typical bomb, not the fusion reaction itself.
Fusion reactors would greatly prefer to use D-He3 fusion, since it produces protons which can be easily captured in the magnetic field, preventing energy loss. Also He3 is easier to store and easier to find than T. Unfortunately, that extra proton in He3 makes the electrical repulsion that much harder to overcome.
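For reference, the two reactions being compared (standard figures):

D + T -> He-4 (3.5 MeV) + n (14.1 MeV)
D + He3 -> He-4 (3.6 MeV) + p (14.7 MeV)

Note where the energy ends up: in D-T, most of it is carried by the neutral neutron, which a magnetic field cannot confine, while in D-He3 most of it is carried by the charged proton.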
Saying "I cannot find this info" is a bit disingenuous, since it implies you tried to find the info, which you couldn't have, since wikipedia would be your first stop, and wikipedia does have this info, and it's not hard to find.
Anyway, there are two different kinds of "fusion fuel" in a standard "hydrogen bomb": a small amount of a 50/50 deuterium/tritium gas mixture (appropriately called the "booster") pumped into the core of the "primary" just prior to detonation, and a much larger amount of lithium deuteride (lithium hydride made with deuterium instead of protium, somewhat inappropriately called "fuel") that boosts the "secondary".
Since the primary is compressed by chemical explosives, the much harder to handle and manufacture (but easier to fuse) D/T gas is used to boost the primary. The secondary is compressed by the nuclear explosion of the primary, so the much cheaper and easier to manufacture LiD is used to boost the secondary. (Conveniently, the lithium is totally converted to tritium mid-explosion by neutrons from fission of the plutonium "sparkplug" in the middle of the collapsing secondary.)
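(The breeding reaction in question, for reference: Li-6 + n -> He-4 + T + 4.8 MeV.)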
Note that calling either D/T gas or LiD "fuel" is fairly misleading, since the majority of the energy released from a hydrogen bomb, despite the name, comes from fission of uranium (namely the uranium "tamper" of the secondary, which can even be made from depleted uranium, though enriched is better). The purpose of D-T fusion in nuclear bomb design is to create high-energy neutrons (much higher-energy than those created by fission of uranium) that cause a greater proportion of the bomb's uranium (the actual fuel) to undergo fission than otherwise would.
Of course, if you make the secondary's tamper out of non-fissile material, then most of the bomb's power will come from fusion, and such bombs have even been built, but they really have no use other than limiting fallout during tests. (Tsar Bomba, for example, the most powerful bomb ever made, was purposely nerfed in this manner.)
I'm not sure that's true. According to wikipedia it's estimated that 97% of the energy released by the 50-megaton Tsar Bomba was produced by fusion alone. (Sure, the size of the explosion could have been increased much further by using fissile material, but saying that the energy of the bomb is not produced by the fusion is just wrong.)
My main question was: How much fusion fuel there is in a typical H-bomb? I asked here precisely because I could not find this information anywhere. Wikipedia was my first stop.
It's all there, but I'll save you the trouble.
In modern one-stage nukes and in the primary of modern two-stage nukes, the amount of fusion "fuel" used to boost the primary is up to 5 grams of D/T, pumped into the hollow core of the bomb. Smaller bombs use less, and some bombs allow the user to select from a range of explosive yields just prior to deployment. Yield is reduced simply by pumping in less than the maximum amount of D/T. (Not only does less D/T release less energy from fusion, it releases fewer high-energy neutrons, which causes less of the fissile material to undergo fission.) At most about 1.7% of the energy released from a one-stage nuke will come from fusion.
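As a rough sanity check on that figure, here's a back-of-envelope calculation in Python (standard constants; it assumes complete burn of the D/T, which real boosting does not come close to achieving):

```python
# Energy released if all 5 g of a 50/50 D/T mixture were to fuse.
AVOGADRO = 6.022e23      # atoms per mole
MEV_TO_J = 1.602e-13     # joules per MeV
E_DT     = 17.6          # MeV released per D-T fusion
KT_TNT   = 4.184e12      # joules per kiloton of TNT

mass_g     = 5.0
molar_mass = 2.5                            # average g/mol per atom, 50/50 D(2)/T(3)
atoms      = mass_g / molar_mass * AVOGADRO
reactions  = atoms / 2                      # one D plus one T per reaction
energy_j   = reactions * E_DT * MEV_TO_J
print(energy_j / KT_TNT)                    # ~0.4 kilotons
```

Even burned completely, 5 grams of D/T comes to well under a kiloton, which underlines that the boost matters for its neutrons, not its direct energy release.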
The amount of LiD used to boost the secondary depends on the size and shape of the bomb in question, the ratio of lithium-6 to lithium-7, and the ratio of U-238 to U-235 in the tamper, but in general you want 4-10 molecules of LiD per atom of uranium in the tamper. LiD will generally account for 6-12% of the total weight of the bomb. Fusion will generally account for 10-20% of the explosive yield of a two-stage bomb.
As you increase the number of stages (which are assumed to be identical to the secondary), LiD's contribution to the weight of the bomb obviously asymptotically approaches its contribution to the weight of just the secondary, which is 12-25%. By using a non-fissile, non-fissionable tamper, fusion can account for nearly all of the explosive yield of a multistage bomb. (You will always need a fission bomb for the primary and a fission sparkplug in all subsequent stages, though, so it can never be 100%, barring radical new developments in design.)
Note that for military purposes, a nuclear bomb's size is limited by the fact that you need to be able to deliver it via missile or plane, so in practice no nuclear weapons require more than two stages. (A small number of tests, such as Tsar Bomba, have been conducted with bombs that have three or more stages.)
What is the grand-scale geometry of the universe?
As far as we know, the universe is expanding so fast that sufficiently distant regions recede from us faster than c, which means that it's not possible to reach the "edge". However, according to the math, this is not necessarily so. In fact, up until quite recently it was unclear whether the expansion of the universe was accelerating or decelerating. A decelerating expansion was a completely valid possibility.
If the universe were decelerating, its expansion rate would at some point drop well below c; it would eventually stop expanding and start to contract (according to GR, unless we discover something else, the universe is "unstable" in the sense that there cannot be a static, steady-state universe).
However, if I understand correctly, even if the universe were expanding slowly enough (or even contracting) that you could travel towards its "edge", you would nevertheless never "reach" it. In other words, it's not possible to cross this "edge" and travel outside the universe. This, as far as I understand, is due to the geometry of the universe.
However, what kind of geometry is this, exactly? What would happen if you tried to reach the "edge" of the universe (assuming it weren't expanding faster than c)?
In my layman's imagination every seemingly fixed distance between two masses constantly increases, so the shape would be cloudlike, depending on how masses were distributed during the big bang. The edge should be made up of the smallest possible particles at very low density. This is not meant to be an answer to Warp's question. Just felt like throwing it out there, so I could maybe learn why that idea must be fundamentally wrong, in case it is.
Edit:
http://www.youtube.com/watch?v=7ImvlS8PLIo ;p
@Warp: Modern evidence shows that the Universe's expansion is accelerating; moreover, its geometry is "flat" in the sense that the total density is, as far as we can measure, equal to the critical density.
If you have about an hour, and if you don't mind the religion bashing, you can view this video of a presentation by Lawrence Krauss at the Atheists Alliance International conference of 2009 where he talks about just that, and in a layman-friendly way to boot.
Edit: I could have sworn there was another post by Warp before this post. I will leave the reply for posterity.
The General Theory of Relativity says no such thing. What it says, in fact, is that from the particle's point of view, the harder it tries to avoid falling into the singularity, the faster it will fall -- the unaccelerated trajectory is the one with the longest lapse of proper time (an outside observer is essentially irrelevant as he cannot observe the event, but he would say that the accelerated particle takes longer to reach the singularity based on the math). Any particle, no matter how much energy it has, will inevitably fall into the singularity.
AFAIK that's a misinterpretation of "the universe is expanding". The universe is indeed expanding, but that doesn't mean that every single particle is getting slowly farther and farther apart from each other. Why? Because there are other forces keeping them together: The nuclear forces, the electromagnetic force, and gravity.
AFAIK individual galaxies are not expanding because gravity holds them together. However, galaxies are receding from each other because the gravity between galaxies is too weak to hold them (unless two galaxies are very close to each other, eg. colliding).
I know, but that's not what I was asking. The expansion indeed seems to be accelerating, but AFAIK that's not necessarily so, according to GR (and this was an open question up until quite recently). My question was: If the universe were not expanding so fast, or if it were contracting, what would happen if you tried to reach the "edge" of the universe and try to go "outside"?
I have the understanding that it's not possible, and that this is due to the geometry of the universe. Perhaps something like if you try to reach the "edge" you just end up "running in circles" or something. I am asking what exactly happens if you try to reach the "edge" of the universe. What exactly is the geometry of the universe?
I would be interested in knowing what you are responding to here.
There are 2 basic cases for the universe with no cosmological constant, depending on the density relative to the critical density: (1) expansion forever (slowing down) and (2) expansion followed by contraction. (1) is further subdivided into (1a) below the critical density and (1b) exactly at the critical density. Case (1a) has a space that is hyperbolic at large scales; in case (1b), space is Euclidean, and the expansion rate asymptotically approaches zero; and case (2) has elliptical space. In none of these cases do you have an "edge" to space; in fact, space is infinite right after the Big Bang in cases (1a) and (1b).
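For reference, these cases fall straight out of the Friedmann equation with zero cosmological constant,

$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2},$$

with spatial curvature k = -1 in case (1a), k = 0 in case (1b), and k = +1 in case (2).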
In this discussion, I have been assuming the Friedmann metric. More refined models are available, but they are based on inflation and ever-increasing acceleration, so they aren't relevant to your query -- and I don't know much about them anyway.
While reading about the Hubble Ultra Deep Field image, I got thinking: How exactly can we define the distance between two galaxies?
"How far is that galaxy from us" is actually a difficult question to answer because it's very ambiguous. What we see now is how the galaxy was millions, even billions of years ago (from our perspective), not how it is currently. Moreover, the galaxy is receding from us, and hence the distance has increased significantly between the time that it sent those photons and when we received them.
Not only that, but the geometry of spacetime itself can change. What was once the shortest path between two points (iow. a geodesic) might not be anymore. The shortest path might be different now than it was back when those photons were emitted.
We could estimate how long it took for the light to reach us, and from that how far the galaxy was from us when those photons were emitted. However, we have moved away between the emission of the photons and our detection of them, so that has to be taken into account in the calculation (iow. our true distance from the galaxy was shorter when those photons were emitted than the path they took to reach us). (I think some terms related to this are comoving distance and proper distance.)
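For example, if I understand correctly, the comoving distance is obtained by integrating over the expansion history, something like this sketch in Python (the cosmological parameters are just illustrative placeholders):

```python
import numpy as np
from scipy.integrate import quad

C  = 299792.458    # speed of light, km/s
H0 = 70.0          # Hubble constant, km/s/Mpc (illustrative value)
OM, OL = 0.3, 0.7  # assumed flat matter / dark-energy fractions

def comoving_distance(z):
    """D_C = c * integral from 0 to z of dz' / H(z'), in Mpc."""
    integrand = lambda zp: 1.0 / (H0 * np.sqrt(OM * (1.0 + zp)**3 + OL))
    result, _ = quad(integrand, 0.0, z)
    return C * result

print(comoving_distance(1.0))   # ~3300 Mpc with these parameters
```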
To complicate things still further, defining what the distance is now is not an easy question either. That's because "now" is different in different frames of reference. Time passes at different rates in different places (for example, the mass of the galaxy may affect how you experience the passage of time). Defining "now" is complicated.
So what is the best way of defining the distance between two galaxies?
I expect the best way to define distance between two galaxies would be either "if a photon were to exit Galaxy A right now, how long would it take that photon to reach Galaxy B?" or "how long ago did a photon that just arrived in Galaxy B exit from Galaxy A?" But really it depends on context. Using hypothetical lightspeed communications, the former question tells you how long it takes to send a message, while the latter tells you how long it takes to receive one. Those are the usual kinds of questions we care about for more conventional distances.
(And substituting whatever more precise position you want for Galaxy A and Galaxy B)
But that's the thing: during the travel time of that photon, the distance between the two galaxies will increase. Hence the travel time tells you neither the distance at the moment the photon was sent nor the distance when it arrives (because the distance is changing all the time).
(Ok, there are a few exceptions where a pair of galaxies are actually moving towards each other, as their speed towards each other is greater than the metric expansion of space between them, but that's an exceptional situation. Either way, even in that case the distance is changing between sending and receiving the photon.)
Also, as said, "right now" is an ambiguous term because of relativity of simultaneity, and time dilation near gravity wells (and probably other factors).
"How long would it take for a photon to travel from galaxy A to galaxy B" does not really answer the question of "what's the distance between A and B?" (because the question is ambiguous; distance when, from whose perspective, etc.)
Here's an easy (well, relatively speaking...) question I've been working on in my spare time: What is the probability of finding a particle in a quantum harmonic oscillator inside the classically forbidden region? This is a classic question in quantum mechanics if it's specified that the particle is in the lowest energy eigenstate. I make no such restriction. I would like to know the probability as a function of the energy quantum number, n, and, if the math is conducive to it, perhaps as a function of the (complex) coefficients in the energy/number basis for a general superposition. I especially would like to know an approximating function for the rate of convergence of the probability (presumably to zero) as a function of n.
I've already got a Mathematica script working and I've made good progress, but a few other things are on my plate at the moment. Anyone else want to jump in on this?
Given that the classically allowed region (for the ground state) ranges from xi = -1 to xi = 1, the answer depends on the integral of the squared wavefunction over this interval; the problem is just finding a convenient way to express the n-th Hermite polynomial (or rather its square, but you get the idea).
However, even for the fundamental mode you have to go to the normal distribution tables (i.e. the error function) to compute it, since the integrand has no elementary antiderivative, so getting the exact value of the probability is complicated. I think a good way to start is to use integration by parts and get the value as a function of something in the normal distribution table. It doesn't look simple, though, since you need a good bound on the n-th value in order to find the convergence rate, and Hermite polynomials are not fun to work with.
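To make the fundamental mode explicit: with $\psi_0(\xi) = \pi^{-1/4} e^{-\xi^2/2}$,

$$P_\text{out}(0) = 1 - \frac{1}{\sqrt{\pi}} \int_{-1}^{1} e^{-\xi^2}\,d\xi = 1 - \operatorname{erf}(1) \approx 0.157.$$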
I'm a step ahead of you, but you're catching up quickly. I had the most success when I nondimensionalized everything, at which point the bounds of the integral go from -sqrt(2*n+1) to sqrt(2*n+1). I was able to express the core of the answer in terms of some integral of the nth Hermite polynomial squared, plus a number times the error function. There are some other factors in there, but as I recall, they're just constants (sqrt(pi) and 2^n*n!). After having Mathematica compute the first fifty terms, I was a little surprised to find that there isn't a straightforward approximation for P(n). I had expected something like a 1/n relation, but it seems to be dying off much more slowly than that. (I just checked, and there is a decent chance it is a 1/sqrt(n) relation, though it's hard to tell and I'd like to prove what it is.)
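In case anyone else wants to reproduce the numbers, here's a minimal sketch in Python/SciPy rather than Mathematica (purely illustrative; the 2^n n! normalization overflows double precision somewhere past n of roughly 150):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite, factorial

def p_forbidden(n):
    """Probability that the n-th eigenstate lies outside the classically
    allowed region |xi| <= sqrt(2n+1), in dimensionless units."""
    norm = 1.0 / (2.0**n * factorial(n) * np.sqrt(np.pi))
    density = lambda xi: norm * eval_hermite(n, xi)**2 * np.exp(-xi**2)
    a = np.sqrt(2.0 * n + 1.0)
    inside, _ = quad(density, -a, a, limit=200)  # integrand oscillates for large n
    return 1.0 - inside

for n in (0, 1, 5, 10, 50):
    print(n, p_forbidden(n))   # P(0) ~ 0.1573, then slowly decreasing
```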
If it is approximated by a 1/sqrt(n) relation, I would like to derive the coefficient of that term. If not, I'd like to find a sort of Laurent series or maybe even use the method of Frobenius or use a Laplace transform-- anything that's appropriate for describing a decreasing function like this one.
The fact that it decreases so slowly is sort of appealing to me, since it suggests that observable discrepancies between quantum and classical mechanics are within nearer reach than one would first think.
There's something that I have never been able to understand in general relativity, and I would be really grateful if someone could help me understanding. (And while I'm not afraid of math, I would like to understand this visually rather than through equations, so please try to help me visualize how this works.)
If I understand correctly, according to the theory gravity is not caused by a spooky force; it's in fact a result of the geometry of space-time. Free-falling objects are solely affected by the basic law of inertia, iow. they follow the shortest path (iow. a geodesic) unless acted upon by a force (deviating from the geodesic would mean acceleration, which won't happen unless a force causes it). But since space-time is curved close to a mass like the Earth, these geodesics are curved, and thus as objects traverse through time they follow geodesics that look to us like curves.
Anyway, what I do not understand is why the geodesics are different for objects moving at different speeds. For instance, if I throw an object horizontally at 1 m/s, it will follow a drastically different geodesic than if I had thrown it at 2 m/s.
I fail to grasp why shortest paths depend on velocity. In a purely Cartesian coordinate system the shortest path between two points does not depend on how fast something is moving between those two points. Likewise even if we have a curved surface, such as the surface of a sphere, the shortest path between two points on that surface will not depend on the speed of an object traveling that path.
I suspect that the problem here is that I'm thinking three-dimensionally. My brain is completely incapable of visualizing a four-dimensional coordinate system (even a Cartesian one, let alone a curved one!), so there has to be a way to simplify this.
The classic way of simplifying this is to compress one of the dimensions to zero size, so that eg. the original x and y axes are still the x and y axes, but the z axis is compressed to zero size, and the fourth t axis can now be visualized as a third dimension. But then what? I don't know how to proceed from here in order to visualize the geodesics in a curved space that depend on the speed of the object.
(The classic "gravity well" picture is imaged like above, with a funnel-like surface. However, even there shortest paths do not depend on speed, so there has to be something more to it than that.)
General relativity is really not my forte (though I try to learn it every now and then), but I think I can help you at least a little. I believe your confusion comes from considering different geodesics through space, not through spacetime. Take your classic Minkowski space-time diagram and wrap it around a cylinder with the time axis parallel to the cylinder's central axis. Now consider the trajectory (through spacetime) that two objects with different velocities will take. An object with zero speed will travel in a straight line up the cylinder. An object with arbitrarily high speed (neglecting the finite speed of light) will wind around the cylinder in a nearly horizontal circle. Intermediate speeds set the pitch angle of the helix.
This is an extremely simple example (and unlike you, I am afraid of math!), but it illustrates that different geodesics follow from different initial velocities.
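If it helps, here's a quick matplotlib sketch of those helices (purely illustrative; "speed" is in arbitrary units):

```python
import numpy as np
import matplotlib.pyplot as plt

# World lines on the cylinder: height = time, angle = position in space.
# Speed only changes the pitch of the helix; unrolled flat, each world
# line is still a straight (unaccelerated) line.
t = np.linspace(0.0, 2.0 * np.pi, 300)
ax = plt.figure().add_subplot(projection='3d')
for speed in (0.0, 0.5, 1.0, 2.0):     # 0 = straight up the cylinder
    theta = speed * t                   # faster object -> tighter winding
    ax.plot(np.cos(theta), np.sin(theta), t, label=f"speed {speed}")
ax.set_zlabel("time")
ax.legend()
plt.show()
```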
Edit: Misner, Thorne, and Wheeler discuss the exact issue you raise on pages 32 and 33 of Gravitation. They point out that the two objects' trajectories are similar through spacetime and neither has any curvature. That's all I can help you, I swear.
I think that's still open. IIRC the current consensus is that experiments contradict neither interpretation. It may be caused by gravitons, or it may be caused by an actual bending of space. At the scales at which gravity is measurable with current equipment, both interpretations yield the same results.
The best way to think about it visually is this: speed in space-time is analogous to an angle in an Euclidean space (strictly speaking, it is the rapidity -- the hyperbolic arctangent of speed -- which is analogous to an angle); a Lorentz transformation is exactly analogous to a rotation in Euclidean space (but since space-time is hyperbolic, it would have to be an imaginary angle).
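In symbols (with c = 1): the rapidity is $\phi = \operatorname{artanh}(v)$, and a Lorentz boost

$$t' = t\cosh\phi - x\sinh\phi, \qquad x' = x\cosh\phi - t\sinh\phi$$

has exactly the form of a Euclidean rotation, $x' = x\cos\theta + y\sin\theta$, $y' = -x\sin\theta + y\cos\theta$, with the trigonometric functions replaced by hyperbolic ones.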
You can visualize this easily using the surface of a sphere (Misner, Thorne and Wheeler use an apple in Gravitation, but the result is the same). In case 1, you have unit speed in one direction; if no forces act on you (other than those that constrain you to move on the surface of the sphere), you will travel in a great circle (great circles are the geodesics of a sphere).
In case 2, you have unit speed at an angle to case 1. You will still travel along a great circle, but it is a different one. If the direction of the first great circle is "time" and the direction orthogonal to it is "space", the angle from the first great circle is related to your spatial speed (how much you move in space per unit time); different spatial speeds, different geodesics.
That answer is utterly wrong (and irrelevant for the question) given Warp's exact words; I highlighted the key terms in Warp's post which show this.
The best way to think about it visually is this: speed in space-time is analogous to an angle in an Euclidean space (strictly speaking, it is the rapidity -- the hyperbolic arctangent of speed -- which is analogous to an angle); a Lorentz transformation is exactly analogous to a rotation in Euclidean space (but since space-time is hyperbolic, it would have to be an imaginary angle).
Could it be visualized like the following?
If we think of one of the three spatial dimensions as having been completely flattened, so that space is a plane, we can picture the Earth as a big circle traversing along the time axis. The object we are examining travels on this same plane alongside the Earth, along the same time axis. If we just let go of the object, it starts moving towards the center of the circle, because space curves towards it as both objects traverse the time axis: projected onto the space plane its path is a straight line towards the center of the Earth, but it is a parabolic curve when the time axis is taken into account as well (ie. if we examine the trajectory in this "three-dimensional" setting).
Giving the object an initial horizontal speed causes it to move on the spatial plane (tangential to the border of the Earth at first) and hence it will follow a different space-time curve towards the center of the Earth. This will cause it to move with respect to the surface of the Earth and hit a different point.
If the initial speed of the object is large enough, its geodesic will actually escape Earth's gravity well (iow. as the Earth and the object advance along the time axis, the object's initial speed is so large that, following the geometry of space, the shortest path leads away from the Earth).
If this visualization is (even close to) correct, then how does the maximum speed c step in into this? Is c kind of "infinite" speed, but it still nevertheless follows a curved path because that's the shortest path even at "infinite" speed? (In other words, if light were to try to travel "straight" it would have to actually deviate from the shortest path?)
The visualization is correct, with the caveat that it is space-time, not space, that curves.
The maximum speed c is what differentiates Newtonian gravity from GR: you can talk about Newtonian gravity as curvature of space-time just as you can for GR. The fact that GR (or SR, for that matter) has a universal speed limit gives space-time a different structure, for example by allowing one to define a (pseudo-)distance function on space-time that gives observer-independent "distances". That is impossible in Newtonian space-time: there you can define a distance between different points at the same time, or between the same point at different times, but not between two different points at two different times. Being able to define such a space-time distance is important: it is basically what allows a universal limiting speed to be possible.
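Explicitly, that observer-independent (pseudo-)distance is the space-time interval

$$s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2,$$

on which all inertial observers agree, even though they disagree about $\Delta t$ and the spatial separations individually.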
And an important nitpick: geodesics are not the shortest path between two points, but the extremal paths. In ordinary Euclidean space, these two mean the same thing; but in space-time, things get tricky: paths followed by normal particles are actually the paths with the longest "distance" (lapse of proper time) between the two points ("timelike geodesics"); for more "spatial" separations, it is the shortest "distance" (proper distance) between the two points ("spacelike geodesics"); and for light, it is zero ("null geodesics" or "light-like geodesics").
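In symbols, for the timelike case:

$$\delta \int d\tau = 0, \qquad d\tau^2 = dt^2 - \frac{dx^2 + dy^2 + dz^2}{c^2};$$

the free-fall path between two events is the one that maximizes the total proper time $\int d\tau$.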
Is this what causes time to pass at different speeds at different altitudes?
It is a factor, yes, but the real cause of that phenomenon is the universal limiting speed (which induces the space-time distance).
Gravity being the curvature of space-time (instead of just space) causes other things more directly: for example, it is the cause of the factor of 2 difference in the gravitational lensing between Newtonian gravity and GR.