Wikipedia:Reference desk/Science: Difference between revisions

From Wikipedia, the free encyclopedia
Edit summary: Please stop spamming the science reference desk with nonsense.


:One kind of obvious point that no one seems to have mentioned is that it could be that your friend's car pings at 94 octane, but not at 97. Yes, he would notice ''that'' difference, for sure. It seems a ''little'' unlikely because it's usually very high-compression engines that want the very high octanes. On the other hand, I believe that old, dirty engines are more prone to pinging, so it's not totally beyond the realm of belief. --[[User:Trovatore|Trovatore]] ([[User talk:Trovatore|talk]]) 02:14, 22 February 2011 (UTC)

== In this video, how does this blind guy walk without a [[guidestick]]? ==

http://www.youtube.com/watch?v=RYRF5-zdY8Q

When he asks what year [[Yale]] was founded, the answer was right in front of him so we could assume that he's blind. However, when he leaves, he doesn't pick up a guidestick.

How do blind people nowadays know how to walk around without guidesticks? --[[Special:Contributions/70.179.187.21|70.179.187.21]] ([[User talk:70.179.187.21|talk]]) 22:08, 21 February 2011 (UTC)

:Please do not spam the reference desk with nonsense. [[User:Nimur|Nimur]] ([[User talk:Nimur|talk]]) 22:12, 21 February 2011 (UTC)

:Isn't it clear that the vid is a satire of a Yale Admissions musical? They try everything 6 ways from Sunday to make it hilarious, including asking a question the answer to which is right in front of him. I don't know the term describing that tactic, but [[obvious humor]] maybe? --[[Special:Contributions/70.179.169.115|70.179.169.115]] ([[User talk:70.179.169.115|talk]]) 03:22, 22 February 2011 (UTC)

= February 22 =

== checks ==

Is there a way to do a free background check on someone? Most sites want money.
Example: http://www.publicbackgroundchecks.com --[[User:Tomjohnson357|Tomjohnson357]] ([[User talk:Tomjohnson357|talk]]) 01:23, 22 February 2011 (UTC)
:Yes, if you put quotation marks around the person's name "like this" then only people with that exact name will be returned. If it's a common name, try adding, one at a time, anything you know about the person, such as a city he or she has lived in, a school they have gone to, etc. If you really want to go into detective mode, add them as a friend on Facebook. [[Special:Contributions/109.128.213.73|109.128.213.73]] ([[User talk:109.128.213.73|talk]]) 01:43, 22 February 2011 (UTC)

Revision as of 03:32, 22 February 2011

Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


February 18

NMR is a kind of flouresence?

"Fluorescence is the emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength." vs "Nuclear magnetic resonance (NMR) is an effect whereby magnetic nuclei in a magnetic field absorb and re-emit electromagnetic (EM) energy."

Since radio wave energy is directed towards nuclei in NMR spectroscopy, and radio wave energy is reemitted from the nuclei, one could say that NMR is a kind of flourescence in the radio wave spectrum?

The term Fluorescence (note spelling) is typically reserved for light absorption and emission arising from electronic energy levels (and indeed, the first sentence of Fluorescence#Photochemistry bears this out) rather than nuclear effects, as in NMR.
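A rough way to see why the two are usually kept separate is the energy scale of the transitions involved. A quick sketch, assuming an illustrative 500 MHz NMR frequency and a 500 nm fluorescence emission (these are typical values, not tied to any particular instrument):

```python
# Compare the photon energy of a typical NMR transition (radio frequency,
# nuclear spin levels) with a typical fluorescence emission (visible light,
# electronic levels).  Frequencies/wavelengths chosen for illustration only.
h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt

E_nmr = h * 500e6 / eV           # 500 MHz radio-frequency photon
E_fluor = h * c / 500e-9 / eV    # 500 nm visible photon

print(f"NMR photon:          {E_nmr:.1e} eV")     # ~2e-6 eV
print(f"Fluorescence photon: {E_fluor:.2f} eV")   # ~2.5 eV
print(f"Ratio:               {E_fluor / E_nmr:.0e}")
```

The roughly million-fold gap in photon energy is one reason the two processes are treated as distinct phenomena, even though both involve absorption and re-emission of electromagnetic radiation.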

L-Glucose

Why haven't food companies replaced regular sugar with L-Glucose? It tastes the same as sugar but it can't be metabolized, so it can't cause weight gain (or cavities, for that matter, since bacteria can't metabolize it either). --75.15.161.185 (talk) 00:09, 18 February 2011 (UTC)[reply]

We have an article on L-Glucose, which says, "L-Glucose was once proposed as a low-calorie sweetener, but was never marketed due to excessive manufacturing costs." —Bkell (talk) 00:17, 18 February 2011 (UTC)[reply]
And from that same article, "also found to be a laxative". A quick Google search suggests that it's sweet, but not "same sweetness per amount" so one would have to adjust recipes, and that it does not taste identical, so again would not be a simple swap-in replacement. While those aren't fatal flaws (all artificial sweeteners have similar concerns of sweetness, cooking qualities, solubilities, specific taste characteristics, etc.), the more problems there are, the less likely it is that someone will try to overcome them as well as try to commercialize a more expensive material. DMacks (talk) 01:05, 18 February 2011 (UTC)[reply]

Particulate Matter and Heat Convection [?]

I have a bit of a 'thought exercise' concerning heat transfer and how it works with such substances as smoke and dust. (It's a problem that is driving me crazy.)

Suppose that there is a solid cube that is open at one end; to wit, 5 of the 6 sides are made of a material that effectively keeps the cold out (e.g. Gore-Tex or polypropylene). And let us suppose that the 6th (open) end of said box is filled up by a thin "screen" of thick smoke or dust making it impossible to see inside. Let us also suppose—for purposes of this exercise—that no matter how hard the wind blows, the "screen" of smoke or dust will not move or blow away; rather, it will remain a thin barrier between the inside and outside of said box.

Now, the temperature inside the box stands at a comfortable 72 degrees (22 Celsius), but the temperature outside the box is a chilly 25 degrees (-4 Celsius). And there is no internal heat source in the box.

Over time, would the temperature inside the box come to match the temperature outside? Or, would the barrier of smoke (or dust) prevent the loss of heat? Also, would the results be any different if the particulates in question were of different substances?

Thank You for reading this! Pine (talk) 00:24, 18 February 2011 (UTC)[reply]

Smoke which is opaque at the visible wavelength isn't necessarily opaque at infrared or other wavelengths, but, I guess, you mean for us to assume that it is. If so, then that would stop all radiation heat transfer, and, since this magic smoke screen somehow resists any attempt to part it, that would also stop convection. However, there would still be conduction, meaning the smoke particles adjacent to the inside of the box would be heated up, then would transfer this heat to particles farther out, by direct contact, until the outside smoke particles were heated up enough to heat the outside air. Conduction is normally quite slow relative to convection and radiation, but you'd still eventually end up with everything the same temperature. StuRat (talk) 00:45, 18 February 2011 (UTC)[reply]
Define "opaque". If the smoke is simply a black body, then it will absorb infrared radiation from the warm surfaces inside the box, and by direct contact with the air, and it will emit infrared radiation also. The radiation it emits will come out in all directions, thereby removing energy from the box. Similarly, smoke that merely scatters light from inside will not prevent energy from escaping. Now if it is both smoke and mirrors (say, a swarm of nanobot corner reflectors tethered in formation) then you have a mirror around a warm spot - essentially a space blanket, effective but not as good as a thermos, since heat is still readily transmitted by contact. Wnt (talk) 18:20, 18 February 2011 (UTC)[reply]
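To put a rough number on the conduction-only case described above, here is a Fourier's-law sketch. The 1 cm screen thickness and the use of still-air conductivity for the smoke layer are assumptions made for illustration:

```python
# Steady-state conductive heat flux through a thin, still gas layer:
# q = k * dT / L   (Fourier's law; radiation and convection ignored)
k_air = 0.026        # W/(m K), thermal conductivity of still air near 0-20 C
dT = 22 - (-4)       # K, inside (22 C) minus outside (-4 C)
L = 0.01             # m, assumed thickness of the smoke "screen"

q = k_air * dT / L
print(f"Conductive heat flux ≈ {q:.0f} W per m^2 of screen")   # ~70 W/m^2
```

So even with radiation and convection somehow blocked, the box would still leak heat at tens of watts per square metre of screen and drift toward the outside temperature, just more slowly than an ordinary open box.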

Aldehydes and ketones

Why are aldehydes classified as a separate type of compound from ketones, even though they're just 1-ketones? --75.15.161.185 (talk) 00:42, 18 February 2011 (UTC)[reply]

That H allows them to undergo reactions that ketones cannot (example: oxidation to carboxylic acid) and they are generally less stable than ketones (so even the reactions they do that are "same as ketone" are often faster). The H also has distinct properties that have no analog in ketones (spectroscopic signals, chemical reactivity, etc.). At some point, it becomes progressively sillier to give overly specific names based on subtle differences ("ethyl ketone" vs "methyl ketone" perhaps?) but presence vs absence of a certain type of bond or atom is usually important enough to mention. DMacks (talk) 00:59, 18 February 2011 (UTC)[reply]

Organism that acts as an air pump

This is a pretty odd question, but I'm hoping someone might have some idea. I'm trying to find some sort of organism (for research), of any size, single or multicellular, that somehow pumps air from one level to another, for example a social algae that removes CO2 from the atmosphere and deposits it underground. It is most important that it transports some sort of gas, though the more/more different types of gas the better. Also if it's a larger organism, structure isn't important, as the research will be built around the organism. Is there anything that fits this bill? Thanks! 64.180.84.184 (talk) 01:28, 18 February 2011 (UTC)[reply]

I don't know if this counts but bees will fan their wings in order to circulate air in the hive. Ants construct their structures with an eye to local winds in order to ventilate the nest. The Mudskipper will dig a hole and keep an air pocket (actually lots of animals will do that). There are animals that collect air bubbles for their nest, like the Paper Nautilus [1] And here are some more. Ariel. (talk) 01:55, 18 February 2011 (UTC)[reply]
I wonder how much air you could get a diving bell spider to transport if you offer it unlimited food, but sneakily insert a tube into its underwater air supply. But rather than enriching UF6 I'd prefer using an electric eel... ;) Wnt (talk) 03:21, 18 February 2011 (UTC)[reply]

Hmm... I didn't even consider organisms that literally transported the air by moving! I don't think this will work though unless the process is passive. 64.180.84.184 (talk) 05:40, 18 February 2011 (UTC)[reply]

Blowfish. Cuddlyable3 (talk) 10:19, 18 February 2011 (UTC)[reply]
The gas glands of physoclistous fish pump gases and can achieve remarkably high pressures. Page 45 gives an explanation as to how they work. Also, you can eat the experiment once you're finished.--Aspro (talk) 15:15, 18 February 2011 (UTC)[reply]
Fucus (with pneumatocysts) transport minute amounts of air. I imagine that with just the right environment you could do selection cycles for lumps of fucus that develop buoyancy in the least amount of time - who knows how far it could progress? Wnt (talk) 16:51, 18 February 2011 (UTC)[reply]
Algae often have pyrenoids which actively pump CO2 into them. The article is pretty useless, but this thesis has a lot of details. SmartSE (talk) 17:21, 18 February 2011 (UTC)[reply]
This post on a garden forum describes a slimy algae that rises and sinks in a pond during the day. I would imagine gas is involved in its changing buoyancy, possibly with exchange to the air. EverGreg (talk) 20:21, 18 February 2011 (UTC)[reply]

Permeability

What is the value of magnetic permeability (μ) in the CGS system of units? —Preceding unsigned comment added by Rk krishna (talk) 05:36, 18 February 2011

ummm, 1? 213.49.110.218 (talk) 05:45, 18 February 2011 (UTC)[reply]
(EC) The conversion factor from SI units is 4π × 10⁻⁷ according to this [2]. Mikenorton (talk) 05:50, 18 February 2011 (UTC)[reply]
This is the conversion from CGS EMU to SI, not from SI. SpinningSpark 01:22, 19 February 2011 (UTC)[reply]
For the record, this is why CGS is and remains awesomely awesome and you blokes who use MKS for your resistors and capacitors ought to be put into forced labor camps until such silly conventions are thoroughly beaten out of you. SamuelRiv (talk) 07:11, 18 February 2011 (UTC) [reply]

I changed the section title for easier reference. Cuddlyable3 (talk) 10:14, 18 February 2011 (UTC)[reply]

There are (at least) two versions of the CGS system when it comes to electromagnetic quantities: namely the electrostatic (ESU) and the electromagnetic (EMU) system of units (there are others, but they have the same result as EMU for permeability). In the EMU system, as stated by 213.49.110.218 above, the value of μ0 is 1. In the ESU system, however, it is approximately 1.11 × 10⁻²¹ s²/cm² (that is, 1/c² with c in cm/s). See this table. SpinningSpark 15:41, 18 February 2011 (UTC)[reply]
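For reference, a compact way to state the values discussed above, using the standard textbook relations (the ESU figure is simply 1/c² with c in cm/s):

```latex
% Permeability of free space in the three systems discussed in this thread
\mu_0^{\mathrm{SI}} = 4\pi\times 10^{-7}\ \mathrm{H/m}, \qquad
\mu_0^{\mathrm{EMU/Gaussian}} = 1 \quad (\text{from } \mathbf{B} = \mathbf{H} + 4\pi\mathbf{M}), \qquad
\mu_0^{\mathrm{ESU}} = \frac{1}{c^{2}} \approx 1.11\times 10^{-21}\ \mathrm{s^{2}/cm^{2}}.
```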

fridge

How come the engine doesn't burn out on those open fridges in stores that have yogurt etc.? There's no door.

They make the engine larger so it can handle the increased demand. Then it can cycle on and off like a normal compressor. And BTW, I don't think they would burn out even if you did run them continuously. Starting is probably the action that wears a motor the most, just letting it run doesn't harm it (much). Ariel. (talk) 07:18, 18 February 2011 (UTC)[reply]

So can I leave my fridge open for weeks?

Without burning out the motor (which will run continuously), probably yes. But since your household fridge (presumably) wasn't designed to work with the door open, it won't be able to achieve a normal fridge temperature, will consume a lot more electricity for which you will pay, and will likely ice up quickly and have to be defrosted. Stores have to balance increased power consumption (and larger and more capitally expensive motors) against the convenience of their customers not having to look through, and open and close, doors, which would reduce sales and revenue. Also, an open fridge markedly cools the air in its vicinity, which is acceptable for customers briefly visiting an area of a store, but would likely be uncomfortable in your home. 87.81.230.195 (talk) 08:13, 18 February 2011 (UTC)[reply]
... and many supermarket fridges have closed glass doors. The open ones are designed to reduce escape of cold air (unlike household fridges). Dbfirs 08:41, 18 February 2011 (UTC)[reply]
Note that leaving open the door of your kitchen refrigerator for weeks is a way of making the kitchen warmer not cooler, and is likely to spoil your yogurt, etc. (short for et cetera) Cuddlyable3 (talk) 10:11, 18 February 2011 (UTC)[reply]
True: I was unconsciously assuming extraction of the warmed air, which is unlikely to be installed on a household fridge. 87.81.230.195 (talk) 15:15, 18 February 2011 (UTC)[reply]
When I worked at a grocery store, back in my teenage years, the coolers with the open front were designed so that cold air blew out a vent in the front and was aimed up and back. This seemed to make a kind of barrier that helped to prevent the cold air already in the cooler from escaping easily. Googlemeister (talk) 14:23, 18 February 2011 (UTC)[reply]
Yup...it's a type of air door design. DMacks (talk) 16:58, 18 February 2011 (UTC)[reply]

biochemistry

The synthesis of ATP by the photosynthetic system is termed as????

Please do your own homework.
Welcome to the Wikipedia Reference Desk. Your question appears to be a homework question. I apologize if this is a misinterpretation, but it is our aim here not to do people's homework for them, but to merely aid them in doing it themselves. Letting someone else do your homework does not help you learn nearly as much as doing it yourself. Please attempt to solve the problem or answer the question yourself first. If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help. If you need help grasping the concept of a problem, by all means let us know. -- P.S. Did you see our article on photosynthesis? ---- 174.21.250.120 (talk) 16:37, 18 February 2011 (UTC)[reply]

Blocking gravity?

Is it impossible to block gravity, or have we just not discovered how to do it yet? If it's impossible, what makes it impossible? — Preceding unsigned comment added by 83.54.216.128 (talkcontribs)

  1. It is impossible.
  2. That's just the way it is.
Dauto (talk) 14:06, 18 February 2011 (UTC)[reply]
One of the reasons that it's impossible is that there is no such thing as negative (gravitational) mass. Technically speaking, you can't "block" the electrostatic force either. However, what you can do is counter the attractive/repulsive force with an equivalent amount of opposite charges. That's why there's no electrostatic attraction between the earth and the moon, despite the electrostatic force being many orders of magnitude stronger than the gravitational force. Protons equal electrons and the net charges are zero. The materials which "block" electrical fields do so by reacting to the electrical field, setting up their own, opposite direction field, which cancels out the original field. This they do by separating the positive and negative charges. There's no such thing as negative mass, so you can't counter gravity with an equivalent amount of it, nor can you set up an opposing field with positive/negative mass separation. -- 174.21.250.120 (talk) 16:08, 18 February 2011 (UTC)[reply]
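The "many orders of magnitude" can be made concrete with the standard proton-electron comparison; a sketch (the ratio is independent of separation, since both forces fall off as 1/r²):

```python
# Ratio of electrostatic to gravitational attraction between a proton
# and an electron.  The separation cancels out, so no distance is needed.
k_e = 8.988e9       # Coulomb constant, N m^2 C^-2
G   = 6.674e-11     # gravitational constant, N m^2 kg^-2
q   = 1.602e-19     # elementary charge, C
m_p = 1.673e-27     # proton mass, kg
m_e = 9.109e-31     # electron mass, kg

ratio = (k_e * q**2) / (G * m_p * m_e)
print(f"F_electric / F_gravity ≈ {ratio:.1e}")    # roughly 2e39
```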
The current understanding of gravity doesn't allow for it to be blocked. But, science fiction does because it can consider gravity to be transmitted by particles called gravitons. Because they are particles, you can block them. Then, gravity cannot be transmitted. The problem with this is that every object would emit gravitons. So, if you place a huge gravity shield between you and a planet to block gravity, you'd then be attracted to the huge shield. -- kainaw 14:07, 18 February 2011 (UTC)[reply]
Though one can imagine science fiction scenarios whereby the gravitons are deflected by means of something massless. And if the deflection device was of considerably lesser mass than the thing it was deflecting, that would still be pretty useful. But I'm not saying that any of this is really science. --Mr.98 (talk) 14:55, 18 February 2011 (UTC)[reply]
You might find antigravity and gravitational shielding a good place to start. Among other things, such devices might allow construction of perpetual motion devices—something that the Universe generally frowns upon. TenOfAllTrades(talk) 14:11, 18 February 2011 (UTC)[reply]
One possible form of dark energy (the one which I believe in) is something in galactic voids which exerts an anti-gravity force and thus both pushes galaxies together and also increases the rate of the expansion of the universe. (A different form of this theory has the dark energy constant everywhere, which still accounts for the accelerating expansion of the universe, but wouldn't force galaxies together as much.) If either form of the theory is correct, then it might, in the distant future, be possible to concentrate the source of this dark energy to create a strong anti-gravity field in a certain location, strong enough to cancel the gravity of a planet or star. StuRat (talk) 18:52, 18 February 2011 (UTC)[reply]
Particles aren't in my field (pun), but gluons are self-interacting strictly-attractive particles and thus, if produced freely, can block each other. The point is merely that there are, in accepted theory, ways to affect a field with certain properties similar to gravity, so not knowing really anything about the gravitational charge-carrier graviton is quite a hindrance to a definitive answer to your question. We are safe in saying, however, that nothing within what we currently know would do such a thing.
One important thing with speculative/science fiction on anything like antigravity, however, is to make sure it doesn't violate more fundamental laws of nature, particularly conservation of energy. When you start puncturing a bunch of holes and tubes in spacetime, we can talk about local violations of this law, but otherwise it has to be obeyed by the very definition of the universe itself. SamuelRiv (talk) 21:32, 19 February 2011 (UTC)[reply]
Samuel, I don't think your assertion that free gluons would be strictly attractive is correct. In fact I think (granted, without doing any of the math) that two gluons with identical color pattern would repel each other. Dauto (talk) 00:35, 20 February 2011 (UTC)[reply]
There's math involved? I thought particle physicists usually just bastardize group theory and then ask for billions of dollars because they claim it's "fundamental". ...I'm being a jerk, of course, and I defer to your point Dauto that I have no idea what a free gluon would look or behave like, other than what a prof once told me about making a "gluon lightsabre". SamuelRiv (talk) 09:11, 21 February 2011 (UTC) [reply]

Atom

Are there any circumstances where the distance from nucleus to electron shell within the same atom changes? — Preceding unsigned comment added by 165.212.189.187 (talk)

1st: What has that question got to do with gravity? 2nd: I don't think I fully understand your question. Can you elaborate a little bit more? Dauto (talk) 15:46, 18 February 2011 (UTC)[reply]

1. I don't know yet. OK, is the distance from the inner/outer (take your pick) electron shell to the nucleus constant under all possible conditions in which a particular atom can exist, or can it be different depending on certain circumstances? — Preceding unsigned comment added by 165.212.189.187 (talk)

If you place an atom in an electric field the electron will preferentially be found on one side of the nucleus (polarisation). I'm not sure whether that changes the average distance of the electron from the nucleus. --Wrongfilter (talk) 16:16, 18 February 2011 (UTC)[reply]
Also, if the atom is part of a molecule the electron may spread out into a molecular orbital which is larger than an atomic orbital. Dauto (talk) 16:22, 18 February 2011 (UTC)[reply]
First off, the concept of "distances" in atoms is a little undefined. Because of the quantum nature of subatomic particles, electrons are simultaneously infinitesimally close to the nucleus and infinitely far from the nucleus. The probability varies by radius and is vanishingly small at the extremes, so there's a bounded region where most of the probability occurs, but that region doesn't have a sharp boundary. (Where do you place the inclusion cutoff? 50% probability? 90%? 95%?) That said, the shape and location of the probability distribution is effectively governed by the mass of the electron, the amount of energy it has, the number of electrons it's sharing its space with (cf. Pauli exclusion principle) and the electromagnetic forces acting on it. In the ground state and the absence of an external electromagnetic field, the electromagnetic forces are determined by the number of protons in the nucleus (which element it is), and the number of electrons. This means when you ionize an atom, you change the electron shells - not just the shell which the electron is added/removed from, but all the electron shells. The amount of energy is also important. If you excite an electron (say by it absorbing light), it changes its probability density. This is usually approximated by saying the electron jumps to a different electron shell, but there's a reorganization of all of the shapes and sizes of all of the electron clouds. If you're talking about atoms in molecules instead of isolated atoms, you throw in a host of other complications, as you're not only adding in extra electrons, you have "external" fields from the other atoms in the molecule. (Complicated to the extent that molecular orbitals (electron shells) are more than just a collection of "atom orbitals" and "bond orbitals".) - All that said, within any particular set of conditions, the electron shells should be of constant size and shape. That is, for example, a ground state, neutrally charged neon atom in the absence of any external electromagnetic field will have electron shells exactly the same as any other ground state, neutrally charged neon atom in the absence of any external electromagnetic field. -- 174.21.250.120 (talk) 16:36, 18 February 2011 (UTC)[reply]
How can an electron be "infinitely far" from anything, let alone the nucleus? Matt Deres (talk) 17:37, 18 February 2011 (UTC)[reply]
The "location" is described as a continuous probability function. Any finite limit you place on the distance would be an arbitrary cutoff on that function. The value of the function may asymptotically approach zero, but it doesn't actually permanently reach zero at a certain definite distance out, so again you can't say you will only consider it to be defined up to that certain distance. Now practically speaking, it might be more useful to say "arbitrarily" (since we are using the fiction of electrons having definite position) instead of "infinitely". DMacks (talk) 17:50, 18 February 2011 (UTC)[reply]
I understand that you can't know the position of an electron, but I thought the continuous probability function was for the location "on" the specific shell configuration, or at least that the outermost point of the outer shell was the limit of distance that it could be from the nucleus. Are you saying that the outermost point that an electron can be from the nucleus is infinity? — Preceding unsigned comment added by 165.212.189.187 (talk)
Yes, that is the mathematical result when you compute the probability distribution for an isolated atom, in a Universe which contains just this one atom. This probability distribution will have to be modified at distances where the electron feels the influence of another atom. Even in the idealised case, the probability drops very quickly, so although it is finite even at a kilometer from the nucleus, it is very small indeed. --Wrongfilter (talk) 21:50, 18 February 2011 (UTC)[reply]
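To see just how small, here is a sketch using the textbook hydrogen 1s radial distribution (an idealised, isolated atom is assumed). The probability of finding the electron beyond a radius R is exp(-2R/a0)(1 + 2R/a0 + 2(R/a0)²); at R = 1 km the value is far below what floating point can represent, so the code works with its base-10 logarithm:

```python
import math

# P(r > R) for a hydrogen 1s electron: exp(-x) * (1 + x + x^2/2), with x = 2R/a0.
# For R = 1 km this underflows, so compute log10 of the probability instead.
a0 = 5.29e-11            # Bohr radius, metres
R = 1e3                  # one kilometre, metres
x = 2 * R / a0

log10_P = -x / math.log(10) + math.log10(1 + x + x**2 / 2)
print(f"log10 P(r > 1 km) ≈ {log10_P:.3e}")   # about -1.6e13
```

In other words, the probability is a decimal point followed by roughly 10^13 zeros: mathematically nonzero, physically negligible.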
In relativistic quantum mechanics I think there is a limit given by the time scale of the atom's interaction with other atoms (times the speed of light). -- BenRG (talk) 23:35, 18 February 2011 (UTC)[reply]
Yes, in a Magnetar the shape of the atom is squeezed into a cylinder. Ariel. (talk) 22:27, 18 February 2011 (UTC)[reply]

You can also "encourage" the electron in a given valence shell to be at a greater distance (higher energy) from the atom without changing the quantum state. This happens in intermolecular interactions all the time - see London dispersion forces. SamuelRiv (talk) 21:38, 19 February 2011 (UTC)[reply]

What force is keeping the electron that is a kilometer away from the nucleus associated with that atom?

Cost of Driving

I'm trying to figure out the marginal cost of driving a car. It seems the total cost is a function of both mileage and time. Some costs are obviously related to the miles driven, such as gas, and others are obviously independent of mileage, such as the insurance costs. The cost that has me stumped is depreciation. The car loses value both as a function of time (the car will lose value over time whether or not you drive it) and mileage (two cars of identical age but with different mileage are worth different amounts). Does anyone know how to quantify this? anonymous6494 14:53, 18 February 2011 (UTC)[reply]

It's also dependent on how popular the particular model is on the second-hand market, which is to some extent influenced arbitrarily by fashion and therefore fluctuates over time - some models become more valuable with age, assuming no unusual deterioration in condition. In the UK, at least, cheap guidebook-type magazines are widely published and frequently updated listing averages of current values, based on recent sales, and there are on-line equivalents such as this.
Alternatively, an accountant valuing the depreciation of a company-owned car might just assign an arbitrary percentage-of-the-original-cost loss per year, such that in x years the value on the company's books will decline to zero, but as that linked article (which actually uses a vehicle as an example) details, there are alternative methods. 87.81.230.195 (talk) 15:12, 18 February 2011 (UTC)[reply]
I had looked at Kelley Blue Book's website to try various combinations of age and mileage but the resolution was not very good (a large change in mileage was needed to produce a change in value). The site you linked was similar but my car isn't listed so it may not be available in the UK (it may be there under a different model name). From an accounting perspective US tax law makes it clear how to depreciate a motor vehicle used for business, but depreciation doesn't accurately depict the actual value, only its value for tax purposes. I was hoping to determine how the actual value is affected by driving additional miles. anonymous6494 15:30, 18 February 2011 (UTC)[reply]
Glass's Guide gives the mileage adjustments for the retail and trade values of each model and age of car in the UK. The figure depends on the model and age of the vehicle. I don't have access to a copy to cite any figures, but last time I checked for my old car it was around 5p per mile. It will probably be double that for current models. Dbfirs 17:41, 18 February 2011 (UTC)[reply]
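If you want to put your own numbers on it, one crude approach is to treat depreciation as a fixed loss per year of age plus a loss per mile driven, and fit the two coefficients to listings for your particular model. A sketch with purely made-up coefficients (placeholders, not data):

```python
# Toy two-factor depreciation model: value lost per year of age plus value
# lost per mile driven.  Coefficients are illustrative only; fit them to
# real asking prices for your model of car before trusting any output.
def resale_value(initial_price, age_years, miles,
                 loss_per_year=0.10, loss_per_mile=5e-6):
    """loss_per_year and loss_per_mile are fractions of the initial price
    lost per year and per mile respectively."""
    remaining = 1 - loss_per_year * age_years - loss_per_mile * miles
    return max(initial_price * remaining, 0.0)

price = 20000
marginal = resale_value(price, 5, 50000) - resale_value(price, 5, 50001)
print(f"Depreciation per extra mile ≈ ${marginal:.2f}")   # $0.10 with these numbers
```

A linear model makes the marginal cost per mile a constant; comparing listings or guide values at different mileages (as with the per-mile adjustment mentioned above) is what pins that constant down.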
Note that your example of a cost which is independent of mileage isn't quite right. Insurance companies often ask how many miles you drive per year and figure that into the premium price. Also, if you drive more, an accident is more likely, and if this leads to a claim then your rates will go up again. Finally, if you drive more, tickets are more likely, again increasing your premiums. StuRat (talk) 18:39, 18 February 2011 (UTC)[reply]
Yes, good point. If you get quotes for different anticipated mileages, you can split insurance into fixed and variable elements, though I expect that most insurance companies will use a step function rather than continuously variable premium. Dbfirs 23:45, 18 February 2011 (UTC)[reply]
There are some basic rules in behavioral economics that, all variables regarding car make, model, condition, etc being equal, might give you some guidance on how much a car depreciates in people's heads. For example, let's quantify the idea of a new car losing half its value as soon as it's driven out of the lot. This might seem an irrational cutoff, since the car's been test-driven plenty, often by people who don't know how to use the clutch properly. However, given the choice between a guy offering a discount of $100 on a $10k car (1%) after driving it for a week after buying it and deciding he didn't like it, versus buying a car "new" with no discount, even if the mileage were the same, most people would buy the "new" car. Why? Because it's "new", and 1% isn't a lot to spend (versus, say, for a "new" $300 TV versus a one-week used $200 TV, most would buy the used one at a 33% discount, even though the proper discount and marginal utility are about the same between a car and a TV). My suggestion is, if doing this informally, come up with a list of survey questions, such as "The car is new on the lot, but the same car with same mileage used for one week is for sale at __% off, which do you buy?", and refine the price as you ask more people - you probably only need to ask 5 or 8 people before getting a pretty good idea of how a car depreciates over each year for 10 years, as long as you don't ask them about multiple years in succession. And of course, your best source on behavior is yourself, as long as you answer yourself honestly. SamuelRiv (talk) 21:49, 19 February 2011 (UTC) [reply]
There are good reasons to avoid slightly used products. If the person who bought it quickly changed his mind, that implies that there was either something wrong with the car to begin with, or it was damaged in some way, either of which could cost far more than $100 to fix. If, on the other hand, he decided to sell it after a few years, that might just be because he wants a new car, and doesn't imply that there's anything wrong with it. StuRat (talk) 00:20, 21 February 2011 (UTC)[reply]

Food spoilage - refreezing meat

One of the mantras of food safety is (so far as I'm aware) to never re-freeze meat products, presumably because the bacterial load increases dramatically during the second thawing cycle. Is there something intrinsic to the freeze-thaw cycle that favours undue bacterial proliferation? If I take any given two pounds of ground beef, freeze and thaw one of them twice over the course of two days, and just leave the other one in the fridge for two days, will there be any appreciable difference in the multiplier of bacteria count? I'm aware of the need for proper cooking, proper handling to avoid cross-contamination, &c, but can't find any biological basis for this particular "rule" of food in our articles. (FD: this pertains to a real-world discussion, but the discussion is with a medical doctor, so at least on this end any suggestion of medadvice would be met with peals of laughter - I'm asking about only the biology here) Franamax (talk) 16:01, 18 February 2011 (UTC)[reply]

I've honestly never heard of this particular "rule." The Food Safety and Inspection Service is the king of meat food safety here in the US and A, and their advice page on freezing doesn't say a thing about it. Even listeria doesn't grow at freezing temperatures since the water activity of ice is... marginal. Foods that are handled a lot (i.e. if you take a little bit off a hunk of ground beef six or seven times) are at greater risk just because every encounter adds a possibility of contamination, but washing your hands and other common sense stuff makes that risk marginal. To be honest, cooking it fully is the big answer to most bacterial issues, though S. aureus toxin is of course going to stick around even if the bug itself is gone. The quality of thawed and re-frozen meat might be... undesirable, of course. SDY (talk) 16:16, 18 February 2011 (UTC)[reply]
The American FSIS say on their website that it's ok.[3] But the Food and Nutrition Service say it's not.[4] The UK's Food Standards Agency says you can refreeze meat once cooked, not raw.[5] In the EU, the statement "Do not refreeze after defrosting" is mandated on all quick-frozen goods under Council Directive 89/108/EEC. 17:04, 18 February 2011 (UTC)
And The Straight Dope has a quick overview. Nanonic (talk) 17:12, 18 February 2011 (UTC)[reply]
All frozen food in the UK carries this instruction not to refreeze. It serves as simplified guidance to hoi polloi who may not understand or follow more complex advice. Considering how dangerous food poisoning is, it seems a sensible approach. Much of the fresh unfrozen meat and seafood that you buy in supermarkets was stored in deep freezers, yet it has been thawed and often labelled as being suitable for freezing on the day of purchase. Chefs etc., on the other hand, are schooled in the health and safety aspects of food storage and should know when it's safe to refreeze. Their fridges and freezers must also have externally visible temp gauges and have fans to speed up cooling.--Aspro (talk) 17:19, 18 February 2011 (UTC)[reply]
It's not just the time of the freezing process, it's the total time of the meat spent at temperatures that are compatible with bacterial growth. If the meat spends four hours at susceptible temperature thawing, then two refreezing, then four hours thawing again, that's sort of like leaving it out for ten hours, and you may end with so much pathogen that even if cooking kills 99.9% of it you still have enough for an infectious dose. The "don't refreeze" isn't ridiculous in the context of cumulative time, but "don't leave it sitting at temperatures that grow bacteria" is really what they're talking about. SDY (talk) 17:55, 18 February 2011 (UTC)[reply]
I think the problem may not be with the bacteria themselves, but our perceptions and memories. Consider three paths frozen meat can take:
A) You buy it frozen, thaw it, leave it in the fridge too long, and you get worried about it going bad soon, so you cook it and eat it. Perhaps you have a light exposure to bacterial toxins.
B) You buy it frozen, thaw it, leave it in the fridge too long, and you get worried about it going bad soon, so you toss it. No exposure to bacterial toxins.
C) You buy it frozen, thaw it, leave it in the fridge too long, and you get worried about it going bad soon, so you refreeze it to stop the bacterial growth. It then stays in the freezer for a year, and you forget that it was questionable when you refroze it. You now take it out, thaw it, and leave it in the fridge several more days, and it goes bad before you cook it and eat it. Heavy exposure to bacterial toxins.
This could be addressed by writing the history of how long the meat has been stored at various temps on a label on the meat, and doing some calculations to see if it's still safe, but most people wouldn't do that (perhaps a butcher dealing with sides of beef might). Another factor is if the meat is defrosted in warm water, which is a bacterium's dream. Doing this twice is far worse than once (if the bacteria grows to 10x the original count once, it would grow to 100x if this is done twice).
Perhaps in the future each slab of meat will come with a device that can tell you whether it's safe or not, by, say, changing the color on an indicator strip that darkens with time and temperature, paralleling the growth of bacteria. This could also be done with a reusable digital thermometer and clock combo. It would ideally be reset and then kept with the meat from the time of the slaughter until cooked and served. I picture them being returned to the butcher/store for refund at the next visit, and then returned to the slaughterhouse, sterilized, reset and reused. Somewhat less effective would be if the consumer took his own device with him to the store, put it with the meat as he put it in the basket, and reset the device then. StuRat (talk) 18:09, 18 February 2011 (UTC)[reply]
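To illustrate the cumulative-time point above with rough numbers: growth at unsafe temperatures is roughly exponential, so what matters is the total warm time, however it is split across thaws. The 20-minute doubling time below is a commonly quoted best-case figure for fast-growing bacteria in warm food, so treat this purely as a sketch:

```python
# Exponential growth while the meat sits in the temperature "danger zone".
# Freezing in between pauses the growth but does not reset it.
doubling_time_h = 20 / 60.0      # assumed doubling time, hours

def growth_multiplier(hours_warm):
    return 2 ** (hours_warm / doubling_time_h)

print(f"One 4-hour thaw:  x{growth_multiplier(4):,.0f}")      # ~4,000-fold
print(f"Two 4-hour thaws: x{growth_multiplier(8):,.0f}")      # ~17,000,000-fold
```

This squaring effect is the same arithmetic behind the 10× versus 100× warm-water example above.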
Thanks all for the info and the links. I read them and they seem to repeat the "mantra" without substantiation. However, I may have come up with the answer on my own. When cells are frozen in uncontrolled situations, they have a tendency to lyse, I believe during the freezing process but only made evident upon thawing. This is the difference between selecting only those bacteria competent to penetrate cell membranes and putting a come-one-come-all sign out on the street, isn't it? Is there a reliable source to confirm that? Franamax (talk) 18:35, 18 February 2011 (UTC)[reply]
For food safety, bacterial growth is primarily about four things: nutrients, water, time and temperature (there are other potential complications like pH, but they're not a factor with meat). If the lysed cells provide better nutrition for the bacteria (plausible) or more available water to support growth that might help, but the non-refreezing argument appears to be mostly targeted towards time and temperature. SDY (talk) 18:56, 18 February 2011 (UTC)[reply]
It's going to be difficult to find a reliable source that will tell you it's okay to refreeze meat. As you've no doubt gathered from the responses above, there is at least a non-zero risk every time product is brought to room temperature (i.e. the danger zone). If "common wisdom" says that you should only freeze meat once, any source advocating multiple freezings would leave themselves legally/morally liable for any misadventure that occurred as a result - and for what? I can't think of a particular person or group that would have anything to gain from people refreezing their goods (other than maybe the manufacturers of freezer bags). And when you factor in how poorly most people understand the vectors of food-borne illness plus the ill-conceived methods many people employ to defrost food (e.g. leave it on the table all day for supper that night), you start to think that giving folks the dumbed-down (but safer) advice isn't such a bad thing. Matt Deres (talk) 19:20, 18 February 2011 (UTC)[reply]
I wasn't aware that avoiding refreezing is a safety matter. I thought the point is that when you freeze and thaw meat multiple times, the texture turns more and more mushy. Looie496 (talk) 20:45, 18 February 2011 (UTC)[reply]
Well, it's a safety matter in the sense that bacterial growth is greatest when food is in the danger zone. When the food is sitting in your freezer, bacterial growth is greatly restricted due to the water being unavailable for use. When the food is sitting in your oven, the internal temperature of the food is (hopefully) being raised to the point where the bacteria are killed off. But room temperature is problematic and that's the temperature many people defrost their meat at. For example, when I was a kid, my mom thought nothing of leaving a package of ground beef on the counter for several hours so she could use it for supper. Likewise with cuts of chicken, pork, and so on. Do that once and you might get away with it, but each time you do it, you roll those dice again. For the record, the best way to defrost a chunk of meat is to immerse it in cold water that is kept in motion, for example by putting it in a bowl filled with water and with the tap set to drizzle cold water in, keeping a current moving. Even big chunks of roast defrost quickly - and without the partially cooked bits you get from using a microwave.
No question that the texture gets worse - and not just with meats. Stuff like strawberries will literally turn to mush after just one or two re-freezings. Matt Deres (talk) 21:27, 18 February 2011 (UTC)[reply]
It sounds like your mother may have known a lot about how to avoid food poisoning. Before refrigeration or ice boxes came into being, people knew what was good and bad practice. Even today, in places like Africa, one can see raw red meat covered black with flies and on the point of going putrid, but cooked -its OK. Chicken there however, is taken home still flapping and vocally protesting its innocence. Also, I have noticed that in recent years red meat in our own supermarkets is not so well bled as it used to be. This shortens the time it can remain safely uncooked. Well bled beef tastes much better if it is matured for a few weeks before cooking. The Inuit actually eat putrid meat as a delicacy... but I will leave that until another day. --Aspro (talk) 22:07, 18 February 2011 (UTC)[reply]
Something that hasn't been mentioned yet is that the bacterial toxins have more time to build up with repeated thawing and refreezing, and they are not removed by cooking. 92.29.119.194 (talk) 00:19, 19 February 2011 (UTC)[reply]
That might be because some toxins (most) are heat labile.--Aspro (talk) 00:27, 19 February 2011 (UTC)[reply]
I hinted at this above, actually. S. aureus toxin is quite stable at normal cooking temperatures. As toxins go, it's not too bad in that it won't kill you, but I don't think anyone would enjoy recreational use. SDY (talk) 00:31, 19 February 2011 (UTC)[reply]

Do other apes have problems giving birth?

Does any other species have so much trouble? 66.108.223.179 (talk) 16:03, 18 February 2011 (UTC)[reply]

I don't think so. It's how our legs are positioned to allow us to walk upright full-time that causes the problem. Other apes have legs positioned more to the side, providing more room for child-birth (ape-birth?). StuRat (talk) 17:47, 18 February 2011 (UTC)[reply]
Obstetrical Dilemma is relevant but a little vague about when exactly humans' ancestors developed bipedalism. Comet Tuttle (talk) 18:09, 18 February 2011 (UTC)[reply]
My understanding is that the most important factor is the size of our heads at birth. Other apes have much smaller brains relative to body size, and correspondingly smaller heads. Looie496 (talk) 18:59, 18 February 2011 (UTC)[reply]
A detail: Exactly why the human (female) hip area isn't wider. It's to do with walking upright, but the exact reason, as far as I've gathered, is that a wider hip increases the risk of tearing the leg muscle. EverGreg (talk) 20:08, 18 February 2011 (UTC)[reply]
I suspect that it has to do with speed and efficiency when running. Girls can run as fast as boys, until puberty hits and their hips widen. Then the swinging of the hips from side to side seems to slow them down considerably. If you look at female Olympic runners, they tend to have rather narrow hips. StuRat (talk) 21:45, 20 February 2011 (UTC)[reply]
It was confirmed a few days ago that Lucy's species walked upright as humans do 3 million years ago. 66.108.223.179 (talk) 05:28, 19 February 2011 (UTC)[reply]
Then I suspect that they probably did have more trouble giving birth than most other apes, although perhaps not as much as modern humans, if their heads were smaller, at birth, relative to the size of adult females, than ours. StuRat (talk) 21:42, 20 February 2011 (UTC)[reply]

Virtual black box

Aircraft black boxes provide valuable crash info, but can't always be recovered. Has anyone considered the option of a virtual black box ? It would work like this:

1) During operation, airplanes would broadcast their current black box info, at a designated frequency, to the nearest tower. This signal would include identification info for the flight and airplane.

2) A computer at the tower would then store this info.

3) We could stop here, and have investigators contact the various towers near the flight path to retrieve the info after a crash. Or, the towers could report the info, in turn, via the Internet, to a central site where the records for each flight are accumulated and available for real-time analysis on all flights. This could be useful, say, to identify a systemic problem like wind shear or multiple hijackings, while the info could still be used to prevent further problems with other flights.

I envision this system being in addition to the current black boxes. So, has anyone proposed this ? StuRat (talk) 18:27, 18 February 2011 (UTC)[reply]
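For illustration only (this is not any real avionics protocol, and every field name below is hypothetical), the kind of record such a system would stream and a ground station would log might look something like this:

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class FlightDataSnapshot:
    """One periodic 'virtual black box' report (illustrative fields only)."""
    flight_id: str
    tail_number: str
    timestamp_utc: float     # seconds since the Unix epoch
    latitude_deg: float
    longitude_deg: float
    altitude_ft: float
    airspeed_kt: float
    heading_deg: float

# A ground station (or a central site) would simply append each report it hears:
snapshot = FlightDataSnapshot("XY123", "N12345", time.time(),
                              40.7, -74.0, 35000.0, 460.0, 270.0)
print(json.dumps(asdict(snapshot)))
```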

Googling for "virtual black box" aircraft finds it's an idea that's been around for quite some time. DMacks (talk) 18:32, 18 February 2011 (UTC)[reply]
And has it gotten any traction ? StuRat (talk) 18:34, 18 February 2011 (UTC)[reply]
(ecx2) Here's a proposal that was published in IEEE Spectrum. You'd still need the black-box flight data recorder to deal with transoceanic flights, over-the-Amazon flights, etc. Comet Tuttle (talk) 18:35, 18 February 2011 (UTC)[reply]
Aircraft Communications Addressing and Reporting System can be used to download some aircraft data to ground stations; with the disappearance of Air France Flight 447 it has provided some data to the investigation, as the aircraft and the aircraft data recorders have not been found. MilborneOne (talk) 18:43, 18 February 2011 (UTC)[reply]

Commercial airplane satellite navigation

There is the problem that each tower can only track airplanes within a limited range, due to their radar being blocked by the curvature of the Earth. For this reason, I believe the US military has gone with satellite navigation, so the position of each plane can be tracked by satellite and reported back to the various landing fields. Is this correct ? Are there plans to do this for commercial flights, as well ? StuRat (talk) 18:33, 18 February 2011 (UTC)[reply]

Unless the military changed drastically in the last 20 years, there is a hell of a lot of ground-based radar data being used. My job was radar controller maintenance. The curvature of the Earth is handled by elevation. For example, I had to go to Norway (in February) to work on a radar positioned near the top of a mountain. Sure - a satellite may be able to monitor traffic around northern Norway, but having ground-based radar with a satellite uplink works better. -- kainaw 18:45, 18 February 2011 (UTC)[reply]
I believe the military use of satellites has changed drastically in the last 20 years, and does combine radar tower info with satellite info, currently. My question is if commercial aviation has also started to use satellites in this way. StuRat (talk) 19:08, 18 February 2011 (UTC)[reply]
(ec) The tower, by which I presume you mean the local air traffic control at an airfield, is only interested in aircraft it can see out of the window and within ten or twenty miles of the airfield, so radar being blocked by curvature is not normally a problem. The air traffic control centres need a bigger picture of what is going on and they are normally fed with radar images from different places, some of which may be hundreds of miles from where the operator is stationed; think of it like an internet of radar images linked to different ATC towers and control centres. As to using satellites - have a read of Automatic dependent surveillance-broadcast; it's not so much the satellites tracking the aircraft, they just use a system like the GPS/Sat Nav in your car and broadcast the position by radio. MilborneOne (talk) 18:55, 18 February 2011 (UTC)[reply]
But how do the air traffic control centers get their data ? It can't be just the sum of all radar returns, since they won't work on flights across oceans. I suspect that they rely on info broadcast from the airplanes, but that could be missing or unreliable, in the case of an electrical failure or intentional incorrect transmissions, say from a terrorist-controlled airplane. StuRat (talk) 19:04, 18 February 2011 (UTC)[reply]
My understanding is that oceanic flights are not tracked by ATC in real time. They are assigned a route when they leave one continent's coverage area, and are supposed to stick to it as best they can, or report deviations by HF radio. (See procedural control.) TCAS still works to prevent mid-air collisions over the oceans; it does not depend on ground support (but uses some of the same onboard equipment as ATC uses for radar tracking). –Henning Makholm (talk) 16:08, 19 February 2011 (UTC)[reply]
Most civilian aircraft in the United States (and much of the rest of the world) are tracked using Airport Surveillance RADAR. The newest model is ASR-11 and most civilian units are commercially sold by Raytheon Surveillance Systems, and then operated by local air traffic controllers and airports under contract to the FAA. The system works pretty well, and coverage is pretty dense, and the technology fits the organizational model currently in use to manage air traffic (which is as much a procedural challenge as a technology problem). Converting to a satellite-based system is surely possible - it's just expensive and unnecessary. The FAA's current "next big thing" for tracking civil air traffic is digital ASR: see this FAA press-release website. (After edit-conflict): civil RADARs are far more vulnerable to "spoofing" than military systems; they rely on squawk codes that aircraft voluntarily reports for such important data as elevation and aircraft status (plus "unimportant" metadata such as airline and flight number). In any case, this is not considered a serious threat to air defense, or else civil RADARs would be replaced by military-capable RADARs that have electronic countermeasures to make squawk spoofing and airborne electronic RADAR evasion more difficult. Finally, I refer you to Lincoln Laboratory Air Traffic Control Systems, an FFRDC funded by the U.S. Department of Defense and the FAA; this "think-tank" of aviation, electronics, and policy experts consider all of the sorts of issues that you are bringing up, and evaluate strategic technology and policy requirements for the national air traffic control network. In the same way that we have an article on everything, the Feds have an agency for everything. Nimur (talk) 19:16, 18 February 2011 (UTC)[reply]
That was recently on the news. Dauto (talk) 02:37, 19 February 2011 (UTC)[reply]

Mental illness and responsibility of others for own thoughts and feelings

Is the exaggerated and persistent belief that others are responsible for your own thoughts and feelings a common component of some mental illnesses? — Preceding unsigned comment added by Wikiweek (talkcontribs)

I will provide almost the exact same answer as I provided to your earlier question. From one single "possible symptom", it is not possible for us, or even a trained professional, to make a judgement call about whether a person has a mental health issue. Correctly interpreting the results of a psychological screening or therapy interview is very difficult. That is why a psychiatrist is a trained medical doctor who has undergone several years of schooling, technical training, apprenticeship, and residency. A psychiatrist is able to meaningfully interpret the entirety of a person's circumstance and situation, not just the response to a single question. "Short questionnaires" should not be considered conclusive in any way; at best, they may help guide a trained professional by providing a wide set of indicators; but they are not a substitute for professional diagnosis. The reference desk will not provide medical advice, including psychiatric advice; as part of this, we will not be able to provide a concrete answer to your question, because it would constitute a diagnosis and we will not perform psychiatric diagnosis here. You can read our articles on mental health, and draw your own conclusions; and if you need assistance with diagnosis, you should seek a trained and licensed professional. Nimur (talk) 20:56, 18 February 2011 (UTC)[reply]
We can't diagnose you here, nor can we offer you any form of treatment. However, that sounds similar to "institutional think"....Then again, mental illness itself may be a delusion per Thomas Szasz's Ideology and Insanity: Essays on the Psychiatric Dehumanization of Man.Smallman12q (talk) 21:41, 18 February 2011 (UTC)[reply]
Actually there is a pretty straightforward answer. The belief that thoughts and sensations are being inserted into your head by external entities is a common feature of paranoid schizophrenia. Note that this doesn't just mean holding others responsible, it means believing that some entity is physically broadcasting thoughts or voices into your head. Merely blaming others for one's problems is not particularly meaningful. (Note: I don't see the question as asking for a diagnosis.) Looie496 (talk) 22:25, 18 February 2011 (UTC)[reply]

Tappan Zee Bridge cost

In this WSJ article on the Tappan Zee Bridge, it states that the bridge cost $640 million in today's dollars to build in the 1950s. The article also states "And the state has a team of financiers scrambling to find the $8.3 billion needed to replace it as a car-only structure without adding bus lanes or a train line and more than $16 billion with them."

Why would it cost more than 10x as much to replace the bridge today as it did to build in the 1950s (or did the WSJ do its math wrong)? Smallman12q (talk) 21:47, 18 February 2011 (UTC)[reply]

The cost of building materials and the cost of laborers have gone up faster than inflation. Converting to "today's dollars" adjusts for inflation, but inflation tracks money supply relative to number of people. The cost to build something does not necessarily match. Another expense is dealing with rerouting all the traffic and people in the area of the construction. Especially in NY this could be a significant expense. Ariel. (talk) 22:33, 18 February 2011 (UTC)[reply]
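As a rough sanity check on how large a gap can open up even after a CPI adjustment: construction costs that grow only a few percent per year faster than general inflation compound over six decades. The sketch below uses only the two WSJ figures and an assumed 60-year span, and ignores the larger-bridge and stricter-standards factors raised elsewhere in this thread:

```python
# How fast would construction costs have had to outpace general inflation
# to turn a $640M (CPI-adjusted) 1950s bridge into an $8.3B replacement?
original = 0.64e9       # $640 million, already stated in today's dollars
replacement = 8.3e9     # $8.3 billion, car-only replacement option
years = 60              # roughly mid-1950s to 2011

ratio = replacement / original
excess_rate = ratio ** (1 / years) - 1
print(f"Cost ratio: {ratio:.1f}x")                        # ~13x
print(f"Implied growth above CPI: {excess_rate:.1%}/yr")  # ~4.4%/yr
```

So roughly 4-5% per year of cost growth beyond ordinary inflation would be enough to explain the whole gap, before even counting the expanded deck and design requirements discussed in the other replies.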
It's likely the new bridge would be designed to carry more traffic than the old bridge; a bigger bridge costs more money. And while I'm speculating, it may also be the case that a new bridge would need to be built to stricter standards than the old bridge was. —Bkell (talk) 00:54, 19 February 2011 (UTC)[reply]
Adding weight to that interpretation is a passage from our article: "it was constructed during material shortages during the Korean War and designed to last only 50 years.... The collapse of the I-35W Mississippi River bridge in Minnesota on August 1, 2007 has renewed concerns about the bridge's structural integrity." TenOfAllTrades(talk) 14:29, 19 February 2011 (UTC)[reply]
In addition to the original very lightweight design I noted above, the proposed replacement has a significantly expanded deck. The original bridge has seven lanes, and connects to at least eight lanes of highway at each end. It also lacks shoulders, which means that access for emergency vehicles can be an issue, and that any traffic problems rapidly snarl up flow along the entire bridge. The proposed design has ten lanes of automobile traffic, plus shoulders, plus dedicated space for pedestrian and bicycle traffic ([6]). TenOfAllTrades(talk) 14:48, 19 February 2011 (UTC)[reply]
I don't know if it could be done here, but one cost saving approach to expand bridge capacity is to leave the original in place (and continue to do maintenance on it), while building another, parallel to it. One bridge is often used for traffic headed in one direction, with the other used for the other direction. Some advantages:
1) Minimal disruption of traffic during construction of the 2nd bridge. The case where the old bridge is kept intact until the new one is available, then demolished, similarly causes minimal traffic disruption. The case where the original must first be demolished so that a replacement can be built in the same location, however, can cause massive traffic disruption for years.
2) Once completed, the ability to route all traffic over a single bridge, during maintenance, or when accidents make one impassable.
3) If this was a toll bridge, it may be possible to charge double tolls in one direction, and thus eliminate the need to construct toll booths on the new bridge or bridge approaches. StuRat (talk) 21:28, 20 February 2011 (UTC)[reply]

Solar mass loss and orbits

How did scientists estimate the Sun to last for about 4.5 billion years? As far as I know, it loses about 1% of its mass through radiation, which, if extrapolated linearly, should give about 79 billion years given the current hydrogen and helium figures. On the other hand, what was Earth's expected orbit when it began to form?--Email4mobile (talk) 22:10, 18 February 2011 (UTC)[reply]

It's not linear in the slightest (over the entire life). See Stellar evolution for some discussion, but no numbers that I could see. In the current stage (of our sun) it's roughly linear, but it eventually reaches an exhaustion point after which it changes dramatically. Even though it has tons of mass left, it's not able to fuse it like it did before. And finally, even if it were linear, not all the mass gets converted to energy: only the mass deficit between hydrogen and helium gets converted - the mass of the helium stays (ignoring the fact that it will fuse helium too). Rerun your calculations using the mass deficit instead of the total mass and see what you get. Ariel. (talk) 22:39, 18 February 2011 (UTC)[reply]
A not-too-technical overview is provided here, at "Ask An Astronomer" from Cornell University. The lifetime of the sun is "estimated by assuming that the sun will "die" when it runs out of energy to keep it shining." The amount of available energy is estimated from knowledge of nuclear fusion; and the "energy consumption rate" is estimated from observations of the sun's energy output (how "brightly" it is shining). Nimur (talk) 22:52, 18 February 2011 (UTC)[reply]

Note though that the dating of the origin of the Sun doesn't rely on these things. The most precise figures are based on measurements of decay of radioactive elements in meteorites, which almost uniformly date to 4.5-4.6 billion years old, and which are believed on the basis of theoretical considerations to have formed within the first few million years of the solar system. Looie496 (talk) 23:53, 18 February 2011 (UTC)[reply]

The 79 billion figure likely assumes all the hydrogen will eventually get burned into helium. That assumption is incorrect. Only the hydrogen at the core of the sun will be burned. Dauto (talk) 00:58, 19 February 2011 (UTC)[reply]
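For illustration, here is a rough back-of-the-envelope version of that corrected calculation (a minimal sketch, assuming that only about 10% of the Sun's hydrogen is ever available for fusion in the core and that fusing hydrogen to helium converts about 0.7% of the fused mass to energy; the mass, composition and luminosity figures are standard approximate values):
<syntaxhighlight lang="python">
# Rough main-sequence lifetime estimate from the hydrogen -> helium mass deficit.
M_sun = 1.989e30        # kg, solar mass
X_hydrogen = 0.74       # hydrogen mass fraction (approximate)
core_fraction = 0.10    # assumed fraction of the hydrogen that actually gets burned
efficiency = 0.007      # fraction of fused mass converted to energy (the mass deficit)
c = 3.0e8               # m/s, speed of light
L_sun = 3.83e26         # W, solar luminosity

energy_available = core_fraction * M_sun * X_hydrogen * efficiency * c**2   # joules
lifetime_seconds = energy_available / L_sun
print(lifetime_seconds / 3.15e7 / 1e9, "billion years")   # roughly 8 billion years
</syntaxhighlight>
That lands in the same ballpark as the commonly quoted figure of about 10 billion years for the total main-sequence lifetime, rather than the 79 billion years you get by letting the whole solar mass radiate away.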
And during those 4.6bn years, the Sun will continue burning more-or-less the same as it does now. One quite-interesting problem of the late-19th century: by this time, Darwin's ideas were broadly accepted by the scientific community and gradualist geology suggested the Earth was 4bn years old. However, astrophysicists disagreed in theory, and came up with a maximum age for the Earth of 100,000 years. This is by no means Young-Earth Creationism, but the controversy was not without its theological side. The problem was that until the 1920s, everything in the universe was believed to revolve around the brilliant science of statistical mechanics, which can be derived entirely from a priori principles - that is, you don't even need to know what warmth looks like to know what temperature is. So an ordinary thermodynamic engine with the size, temperature, and fuel mass of the Sun could not last even a million years before burning out, even at 100% efficiency. Then we discovered radioactive decay, the quantum mysteries of matter, and the nature of the atomic nucleus, and suddenly we had an entirely new engine in the universe: the nuclear forces. (There's a quote, probably by someone on the Bethe-Critchfield Nobel team for making fusion, when he's out with a companion at night: She said "Look at how pretty the stars shine!" He said "Yes, and right now I am the only man in the world who knows why they shine." According to the story, this was not enough to get her into bed. Amateur.) SamuelRiv (talk) 22:03, 19 February 2011 (UTC)[reply]

So, where can I get a reliable source for this kind of calculation or estimation? (I can see there is no interest in the Earth's orbit problem ;) ).--Email4mobile (talk) 18:21, 20 February 2011 (UTC)[reply]

Effects on an Earth Human upon entering a parallel space-time continuum via a higher space-time continuum

If an Earth Human were to step through an inter-dimensional portal where the new space-time continuum was controlled by a Time "arrow" (as posited by the late Dr. Hawking, in his most famous book) that was pointing in the reverse direction to the Earth's time arrow, would the Earth Human immediately meld with the new time arrow, atom by atom, so to speak? In that case he would continue to age, and even when he returned he would be more aged, but would return to an earlier Earth time period?

K. McIntire-Tregoning, concert composer, February 18, 2011

(NOTE: copyrights suspended for this transmission with Wikipedia.) 189.173.210.245 (talk) 23:25, 18 February 2011 (UTC)[reply]

Your question has no meaningful physics-based answer, because "inter-dimensional portal" and "new space-time continuum" do not have well-formed, meaningful, physics-based interpretations. Sorry. Nimur (talk) 23:29, 18 February 2011 (UTC)[reply]
Mind me asking who is that late Dr Hawking? Stephen Hawking is still alive. Dauto (talk) 02:29, 19 February 2011 (UTC)[reply]
And Stephen Hawking is normally referred to as Prof. Hawking, not Dr. Hawking. --Tango (talk) 18:21, 19 February 2011 (UTC)[reply]
Anyways, any, any, ANY so-called "time machine", i.e. a device/portal/spacetime construction/anything which takes you backwards in time, will be destroyed by vacuum fluctuations within 10<sup>−43</sup> seconds (the Planck-Wheeler time). So no, even if the question had some meaning, it wouldn't work. ManishEarthTalkStalk 10:22, 19 February 2011 (UTC)[reply]
Could you provide some reference for that? Problems with time travel are usually described in terms of the Einstein Field Equations in General Relativity (any solutions permitting it seem to require regions with negative energy density, which appear to be impossible). I've never heard of quantum mechanics and the Planck time being involved. --Tango (talk) 18:21, 19 February 2011 (UTC)[reply]
It doesn't really make sense to talk about the arrow of time being in a different direction in one universe than in another. How would you compare them? Discussions about the arrow of time usually revolve around making sense of the relationships between different arrows of time within one universe: 1) We remember the past but we don't remember the future (the psychological arrow of time), 2) the universe is denser in the past than the future (the cosmological arrow of time) and 3) entropy (disorder) is lower in the past than in the future (the thermodynamic arrow of time). Some people describe other arrows of time, but they can usually be shown very easily to be equivalent to one of those three. What Stephen Hawking worked on (among many other things) was trying to work out whether it is coincidence that, for example, we remember times of lower entropy and not ones of higher entropy, or if the universe has to work that way (in this case, he realised that the act of storing memories in the brain by necessity increases entropy, so the arrows have to point in the relative directions that they do). Trying to compare arrows of time between universes makes no sense. The entire concept of different universes is difficult enough to make sense of. --Tango (talk) 18:21, 19 February 2011 (UTC)[reply]


February 19

About how efficient would this be in terms of transferring chemical energy in the explosive to kinetic energy imparted to the penetrator? ScienceApe (talk) 03:18, 19 February 2011 (UTC)[reply]

The chemical energy is converted into 1) kinetic energy (ie. that due to the projectile's initial velocity after the explosion) and 2) the energy necessary to "re-shape" the material of which the projectile consists. How much energy is required for that second process can be complicated, and I haven't been able to find a very good generally-applicable answer for you, but see eg. Plasticity (physics). You may also be interested in our Shaped charge article. WikiDao 14:32, 19 February 2011 (UTC)[reply]

Relativity

Is it possible for a bubble of spacetime to exist within this universe under the following conditions:

  • There are two observers: observer A is located within this universe, B is located in the bubble.
  • A and B agree on the temporal vector; however, they do not agree on the dimension vectors - each observer claims that the other's space is inverted: left is right, up is down.

Plasmic Physics (talk) 04:12, 19 February 2011 (UTC)[reply]

Your question as it stands isn't well defined. People can "claim" whatever they want; you need to specify some kind of physical reversal effect that everyone agrees took place. For example, is there a spacetime manifold in which you can leave on a voyage and come back mirror-reflected (compared to someone who stayed home)? The answer to that is yes, since space could be shaped like the 3D analogue of a Klein bottle. But this is not really a relativistic question. General relativity doesn't say anything about what spacetime shapes are allowed. The Standard Model of particle physics is not mirror-symmetric, which suggests that such spacetimes aren't possible. -- BenRG (talk) 04:59, 19 February 2011 (UTC)[reply]

I'm not attempting to discuss the topology of the universe, only of a completely enclosed domain within the universe. The surface of the bubble is such that both observers may freely transit the bubble. There is no way for either observer to determine their absolute dimensional directionality, as there is no absolute reference frame. All they can say is that the opposite observer's dimensionality is reversed. Plasmic Physics (talk) 05:29, 19 February 2011 (UTC)[reply]

(EC) It's not clear what you mean by "a bubble of spacetime ... within this universe". Our universe by definition is a single connected spacetime manifold; see Universe#Definition as connected space-time. If there was a bubble of spacetime that's different from our normal universe, it would be a different, disconnected universe. The laws of physics in our universe have CPT symmetry. Parity alone is not conserved, as weak interactions violate parity, so a left vs. right transformation alone, or an up vs. down transformation alone, makes a difference as to our laws of physics. Ideas have been floated that perhaps there exist other universes that have different laws of physics, or at least have different values for physical constants, but such ideas are of course highly speculative, and are probably unfalsifiable. Red Act (talk) 05:34, 19 February 2011 (UTC)[reply]
I still don't think the question makes sense. In a lot of relativity textbooks there are poor imitations of Einstein's thought experiments with wording like "observer A claims [something], while observer B claims [something different]". The physical content of these examples is no different from "observer A claims that she is larger than observer B, while observer B claims that it is he who is larger than observer A." The description of the relative sizes of objects in these people's visual fields is correct, but the use of the word "claims" is inappropriate, since it suggests that these observers reject the reality of the other's perceptions and are unable to understand the difference between the perception and the objective world. There are people like that in real life, admittedly, but I think the people in thought experiments should be competent scientists. If you're using the word "claims" in that way, then what you're really asking is whether it's possible for two people to each see the other as mirror-imaged or as upside-down. The answer is yes, since you can just place a mirror or lens at an appropriate spot between them. I know this isn't a very satisfying answer, but it may be the only correct answer. -- BenRG (talk) 06:47, 19 February 2011 (UTC)[reply]

That is exactly what I'm asking, but I'm asking whether it is at all possible without the use of a lens or mirror-like object: a situation where the observation of parity reversal is independent of the relative location of either observer within their domain with respect to the bubble's surface. With a lens or mirror, the effect is only observed within a limited range. Plasmic Physics (talk) 08:16, 19 February 2011 (UTC)[reply]

Oh, I get it. I'm sorry about the confusion. As a question in flat-space optics I think the answer is provably no, but I'm not sure. As a question in general relativity I'm not sure it makes sense, because whatever gravitational effects apply to light crossing the surface will also apply to matter crossing it. Maybe you could get around that with some crazy spacetime where lightlike geodesics get flipped around and timelike geodesics don't, but it seems dubious. -- BenRG (talk) 09:43, 19 February 2011 (UTC)[reply]
Maybe if the bubble was a spherical "trench" in spacetime (circular trench in 2D spacetime)? It could quite easily refract the rays of light.

So, the solution is purely an optical manipulation. A circular trench wouldn't cause these effects; the light would be refracted around the bubble, turning it invisible, rather like a metamaterial. Plasmic Physics (talk) 14:04, 19 February 2011 (UTC)[reply]

You could twist spacetime, like it gets twisted around a spinning black hole, but you'd need a lot of twist. ManishEarthTalkStalk 16:10, 19 February 2011 (UTC)[reply]

It's not clear what exact condition you're looking for. If you place two persons in Quito and Singapore, they will disagree about the up/down and east/west directions, but still agree about north and south. If you have two observers disagreeing about the signs of all three spatial dimensions, their disagreement will have to be purely conventional in the sense that you cannot turn one's experience continuously into the other's without passing through a degenerate state. (The transformation from one's coordinates to the other's will initially have negative determinant, and the determinant cannot pass through 0 while still denoting something physically meaningful). Henning Makholm (talk) 17:56, 19 February 2011 (UTC)[reply]
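To illustrate that determinant point numerically, here is a minimal sketch (the particular matrices are arbitrary examples): a parity flip of the spatial axes has determinant -1, a proper rotation has determinant +1, and because the determinant varies continuously, any continuous path between the two must pass through determinant 0, i.e. a degenerate transformation.
<syntaxhighlight lang="python">
import numpy as np

# Parity flip of all three spatial axes: determinant -1.
parity = np.diag([-1.0, -1.0, -1.0])

# An ordinary proper rotation (90 degrees about the z axis): determinant +1.
theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])

print(np.linalg.det(parity), np.linalg.det(rotation))   # -1.0 and (approximately) +1.0

# Any continuous interpolation between them forces the determinant through zero,
# i.e. through a transformation that squashes space flat and is not invertible.
for t in np.linspace(0.0, 1.0, 6):
    blend = (1.0 - t) * rotation + t * parity
    print(round(t, 1), np.linalg.det(blend))
</syntaxhighlight>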

And that discontinuity seems exactly what is implied by "bubble". I'm picturing, in this discontinuous case, a domain wall separating two independently-crystallized bubble-universes (a consequence of, say, a chaotic inflation scenario), though I could also see it as a kind of wormhole effect where the twist happens when you tunnel from one end of the universe to another. I'm reminded of the story in Sphereland (a sequel to the classic Flatland) where these flat dog-shapes were usually born right-chiral ( /\--/\o ) but there exist rare left-chiral dogs ( 0/\--/\ which can be rotated in Flatland to \/--\/o , distinct from above). The three-dimensional visitor proves the existence of his dimension by rotating all of this young girl's right-handed dogs into left-handed dogs, much to her delight. If a wormhole tunneled through hyperspace to open in space on another end, perhaps you can go through and end up inside-out directionally (and physiologically)? Time would still run forward in this case. SamuelRiv (talk) 22:20, 19 February 2011 (UTC)[reply]
On further thought, I'll retract my claim about continuous transformations. To the extent that general relativity allows wormhole solutions, it will most certainly also allow globally non-orientable solutions where passing through the wormhole causes you to appear in the original universe with the opposite chirality. Whether such a wormhole can ever form from an orientable initial state is a different question (but perhaps not much harder than making wormholes in the first place, which is an iffy proposition at best), and particle physics (not the best friend of GR) would probably imply that the wormhole also exchanged matter with antimatter.
However, this does not sound very much like a "bubble" to me, so I'm still at a loss regarding what the OP actually meant here. If there is an actual boundary (at least a coordinate singularity if the metric becomes degenerate) between the inside and outside of the bubble, it is not clear how one would even compare the chirality inside and outside. –Henning Makholm (talk) 23:04, 19 February 2011 (UTC)[reply]

By "bubble", I meant a domain with a width, height, and depth, with an actual boundry defined by the enclosing surface. Plasmic Physics (talk) 00:46, 20 February 2011 (UTC)[reply]

But which physical significance does that boundary have? I see now that you say that your two disagreeing observers can cross the boundary freely. But still they disagree? What does their disagreement have to do with the bubble, then? And how is it different (observationally or conceptually) from simply saying that they disagree about what the words "left" and "right" mean? –Henning Makholm (talk) 01:41, 20 February 2011 (UTC)[reply]
For a wormhole to do such a chirality flip, it'd have to have a singularity in its throat. Basically, it should be shaped like a twisted bowtie. I don't know if that's possible...
No, you just take an ordinary orientable wormhole, cut through the throat, and glue the pieces together with one side mirrored, like a Klein bottle with an extra dimension. If the wormhole is sufficiently symmetric, the result will still satisfy the curvature equations at the gluing surface. (The point of using a wormhole is that I think this trick requires a spacetime that isn't simply connected). –Henning Makholm (talk) 16:38, 20 February 2011 (UTC)[reply]

They only disagree when they are on opposing sides of the boundary. I'm not sure what you mean by the difference. Plasmic Physics (talk) 07:14, 20 February 2011 (UTC)[reply]

If they are not in the same place, then how do they compare their orientations? –Henning Makholm (talk) 16:34, 20 February 2011 (UTC)[reply]

As long as they face each other they will always have a clear line of sight. Plasmic Physics (talk) 22:50, 20 February 2011 (UTC)[reply]

Henning, doesn't the existence of a wormhole in the first place imply space is not simply connected? SamuelRiv (talk) 18:38, 21 February 2011 (UTC)[reply]

Carbon black

wtf is carbon black — Preceding unsigned comment added by Tomjohnson357 (talkcontribs) 05:15, 19 February 2011

Carbon black: "Carbon black is a material produced by the incomplete combustion of heavy petroleum products such as FCC tar, coal tar, ethylene cracking tar, and a small amount from vegetable oil....used as a pigment and reinforcement in rubber and plastic products" HiLo48 (talk) 05:25, 19 February 2011 (UTC)[reply]
wtf should be read as "what", any further discussion here is misplaced, take it to the talk page
The original ("informal") format of this question was restored, see the discussion and the relevant edit diff. Nimur (talk) 19:00, 19 February 2011 (UTC) [reply]
I don't understand the question. Please explain for our audience what "wtf" stands for. ←Baseball Bugs What's up, Doc? carrots→ 07:18, 20 February 2011 (UTC)[reply]
How could you tell it was a question? HiLo48 (talk) 07:25, 20 February 2011 (UTC)[reply]
You're right, it could also be a statement of fact. Thanks for leaving the non-native-English speaker baffled by insisting on the original wording. ←Baseball Bugs What's up, Doc? carrots→ 07:27, 20 February 2011 (UTC)[reply]
WTF, first definition. Nanonic (talk) 07:28, 20 February 2011 (UTC)[reply]
So, what has sexual intercourse got to do with carbon black? ←Baseball Bugs What's up, Doc? carrots→ 07:33, 20 February 2011 (UTC)[reply]
Probably a lot if we're talking about reinforcement of rubber and plastic products. Nanonic (talk) 07:35, 20 February 2011 (UTC)[reply]
The original "wtf" was the kind of wording a child would use. Other editors are constantly yelping about how this page is intended to help not just the OP, but anyone else who might come along. Changing "wtf" to "what" was directed to that avowed purpose. Insisting on keeping it as "wtf" was not. ←Baseball Bugs What's up, Doc? carrots→ 07:44, 20 February 2011 (UTC)[reply]

height (and body size, really) variance in animals besides humans -- or the lack thereof?

The article human height places responsibility for the great variability in height across Homo sapiens largely in the hands of health and genetics. Now, I am a birdwatcher, and it occurred to me the other day that I could not recall ever seeing a crow or pigeon that was a head above his peers. Observing the Eurasian Tree Sparrow, of which we have many in China, one can note slight variations in body weight (particularly towards the end of winter) but again - no feathered Yao Ming or I suppose Deng Xiaoping stands out among the crowd. Are humans unusual among animals with respect to their readily observable variance in body length? I am excluding, of course, those species which never cease to grow - we must confine ourselves to those that reach a defined adult form and cease growing. The Masked Booby (talk) 09:33, 19 February 2011 (UTC)[reply]

I thought that in many mammals, where the size advantage in competition for a mate runs up against the genetic disadvantages of being too big, there was reasonable variability. Certainly there seems to be a size difference between adult male deer, and between adult hares and adult rabbits. Presumably this holds for the rest of the animal kingdom too, but I can only speak for the ones I see often. --BozMo talk 10:06, 19 February 2011 (UTC)[reply]
Consider the domestic dog, which ranges in size from the chihuahua to the Alaskan Malamute or Irish Wolfhound. This variance is the product of years of selective breeding. This, I think, is the key to your question: among domestic breeds, there is a great variation in size, but their wild equivalents are all pretty much the same. --TammyMoet (talk) 10:12, 19 February 2011 (UTC)[reply]
Comparing dog breeds is going to be misleading as the dogs were essentially genetically engineered (the old fashioned way) to show the characteristics of the various breeds. Googlemeister (talk) 19:34, 22 February 2011 (UTC)[reply]
One major difference is that humans are "generalists", while most species are "specialists". That is, most species occupy a specific niche and location, and are customized for those only. Humans, on the other hand, occupy many locations and niches, and so were customized differently for each. I suspect that if you find a single species which is as widely spread as humans, then it will also have as wide a variety in physical attributes by population. However, humans have been moving around, out of our natural environments, for centuries now, so you will now find people with attributes designed for the poles living near the equator and vice-versa. StuRat (talk) 21:14, 20 February 2011 (UTC)[reply]

Dog mortality and longevity

Looking at the links above relating to the longevity of various dog breeds, I was surprised that they may on average only live 6-7 years and then die of cancer. 1) Why do dogs get cancer so early compared with humans? 2) Which dog breeds live the longest, and which the shortest? 3) Is there any rule which predicts dog longevity from the breed size, etc.? Thanks 92.15.16.146 (talk) 13:09, 19 February 2011 (UTC)[reply]

WP:OR generality here but small dogs often live longer than large dogs. Oh, and if you want a source with some data, here ya go. Dismas|(talk) 15:03, 19 February 2011 (UTC)[reply]
This is an area where numbers are varying rapidly due to social changes, and not evenly. People in advanced western nations these days seek more veterinary help for their pets than they would have 50 years ago. This makes a big difference. Some more very personal OR: breeds like Beagles, which are prone to escaping and demonstrating little road sense, are being penned and restrained more successfully these days, meaning that they are overcoming their genetic predisposition to being run over. So that breed is experiencing a bigger improvement in longevity than some others. HiLo48 (talk) 15:21, 19 February 2011 (UTC)[reply]
OR here, but mongrels seem to live much longer and healthier lives than highly bred animals. DuncanHill (talk) 20:14, 19 February 2011 (UTC)[reply]
There are definitely certain breeds of dogs that are known to have lots of problems that can be mitigated by crossing breeds (i.e. making mutts). Cocker Spaniels often get Hip dysplasia, for example. See Purebred_(dog)#Health_issues. The American Kennel Club has gotten a bit of flak about this, in that they don't permit member clubs to maintain health standards in addition to the appearance standards imposed by the AKC. Buddy431 (talk) 23:14, 19 February 2011 (UTC)[reply]

Milk of Magnesia/Mike and Ikes

I recently discovered (about 5 minutes ago) that the candy Mike and Ikes contain magnesium hydroxide in trace quantities. Isn't that the main component of milk of magnesia, an antacid/laxative? Why is it there? Finalius (Say what?) 13:33, 19 February 2011 (UTC)[reply]

I would imagine that the concentration in the candy will be very low. Magnesium hydroxide can fill a number of roles in foods and supplements, including "Drying Agent, pH Buffer, Antacid, Color Retention Agent". I suspect that its purpose here is to act as a drying agent, to keep the individual candies from getting tacky and sticking together. TenOfAllTrades(talk) 15:05, 19 February 2011 (UTC)[reply]
(ec)According to WHO/FAO, it's an additive used as an "Acidity regulator [and] Colour retention agent".[7] As with many, many chemicals, a megadose can have very different effects than a small amount mixed with some other material. Also, we don't know (and I bet Just Born won't tell us) exactly how it's used...in combination with other ingredients, it could react to form different chemicals with different properties rather than just being diluted to low levels. DMacks (talk) 15:09, 19 February 2011 (UTC)[reply]

Why is the sun not purple or why is hydrogen plasma not yellow?

According to the hydrogen article, the plasma state is purple, but the Sun is yellowish even though it's made primarily of hydrogen plasma. I know that black body radiation means that it should glow yellow according to its temperature, but then shouldn't hydrogen plasma at the same temperature glow yellowish too? ScienceApe (talk) 16:28, 19 February 2011 (UTC)[reply]

What makes you think the purple discharge shown in the hydrogen article is at the same temperature as the Sun('s photosphere)? –Henning Makholm (talk) 18:00, 19 February 2011 (UTC)[reply]
The image of "purple" hydrogen is dominated by spectral emissions, not thermal radiation. Hydrogen plasma can be purple because one of the characteristic Hydrogen electron transitions (specifically, the 6→2 transition is purplish. In this particular lab-setup, that energy level has been stimulated.
The sun's spectrum is dominated by thermal blackbody radiation - that is, it is not spectral emission lines. Our sunlight article has a whole section on solar spectrum composition explaining the details. There are a few "peaks" and "notches" superimposed on top of the ideal black-body curve, due to photochemical absorption (i.e. spectral absorption) and atomic emission spectra inside the sun. The spectrum received at Earth's surface is further "notched" by spectral absorption of chemicals in the Earth's atmosphere (in the visible spectrum, most absorption lines are caused by water, CO2, and a few ions). Nimur (talk) 19:10, 19 February 2011 (UTC)[reply]
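As a rough numerical illustration of the two regimes (a minimal sketch; the temperature and physical constants below are standard approximate values, not figures from this thread): Wien's law puts the peak of a roughly 5800 K black body near 500 nm, while the Balmer lines that colour the discharge tube sit at a few discrete wavelengths given by the Rydberg formula.
<syntaxhighlight lang="python">
b = 2.898e-3            # Wien displacement constant, m*K
T_photosphere = 5778.0  # K, approximate effective temperature of the Sun
print("Black-body peak:", b / T_photosphere * 1e9, "nm")   # about 501 nm

R = 1.097e7             # Rydberg constant, 1/m
for n in (3, 4, 5, 6):  # Balmer series: transitions n -> 2
    wavelength = 1.0 / (R * (0.25 - 1.0 / n**2))
    print(f"H {n}->2 line: {wavelength * 1e9:.0f} nm")
# about 656 nm (red H-alpha), 486 nm, 434 nm and 410 nm (violet H-delta)
</syntaxhighlight>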
Well I assumed that plasma is still pretty hot. I know you can get cold plasmas but I just assumed that plasma was hot. I dono, maybe it's not hot. Could a black dwarf look purple in that case? ScienceApe (talk) 19:32, 19 February 2011 (UTC)[reply]
It's a question of optical depth; while the plasma emits more at the spectral lines of hydrogen, it also absorbs more in these parts of the spectrum. You can imagine a piece of plasma absorbing the incoming light at the spectral lines strongly, but absorbing far less (in relative terms) of the light between the spectral lines. This piece of plasma will then emit strongly at the spectral lines and weakly between the spectral lines. The resulting light will have a lower ratio of light intensity at the spectral lines and between the spectral lines than the incoming light. In other words, you see deeper into the Sun at frequencies which are not at the spectral lines of hydrogen, and if you would put many such purple laboratory plasma vessels behind each other, they probably wouldn't look purple anymore. Icek (talk) 20:34, 19 February 2011 (UTC)[reply]
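A toy version of that optical-depth argument, for an idealised isothermal slab where the emergent intensity is I = B(1 - exp(-tau)) (the 10:1 ratio of line to continuum opacity is an arbitrary assumed number; the point is only that the line contrast washes out as the slab becomes optically thick):
<syntaxhighlight lang="python">
import numpy as np

# Emergent intensity from an isothermal slab: I = B * (1 - exp(-tau)).
# "line" = optical depth at a hydrogen line centre, "cont" = between the lines.
B = 1.0                                     # source function, arbitrary units
for scale in (0.01, 0.1, 1.0, 10.0, 100.0):
    tau_line = 10.0 * scale                 # lines assumed 10x more opaque
    tau_cont = 1.0 * scale
    I_line = B * (1.0 - np.exp(-tau_line))
    I_cont = B * (1.0 - np.exp(-tau_cont))
    print(f"overall depth x{scale:6.2f}: line/continuum contrast = {I_line / I_cont:5.2f}")
# For a thin slab the contrast is close to 10 (strong purple lines); for a thick
# slab it tends to 1, i.e. many stacked "purple tubes" approach a featureless black body.
</syntaxhighlight>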
You do get purple light from optically thin HII regions in the interstellar medium. The purple line is actually the red Hα line (6565 Angstrom), that is the 3-2 transition (6-2 is Hδ which is weaker and lies at the blue/violet end of the visible spectral range). These lines arise when an electron and a proton in an ionised hydrogen gas recombine to form a hydrogen atom; the electron cascades down to the ground level (n=1) and in the course of that cascade, many electrons go through n=3 and n=2, making Hα one of the strongest emission lines. In an optically thin gas, the photons emitted that way escape immediately. In an optically thick gas, they are reabsorbed very quickly. In the sun, the radiation originates deep in the core of the sun, and while diffusing outwards is scattered, absorbed and reemitted many many times, leading to thermalisation of the radiation and thus a black-body spectrum. This spectrum is modified by the outermost layer of the sun, just when the radiation leaves the sun. This modification takes the form of absorption lines, not emission lines. --Wrongfilter (talk) 20:46, 19 February 2011 (UTC)[reply]
I'm confused. How can it be possible that "purple line is actually the red Hα line"? This seems to be a non-sequitur or a typo... do you mean to say "the strongest line" is the red Hα line? Hα may be stronger, but is always red; Hδ may be weaker, but is always purple; their relative intensities depend on the plasma properties. Nimur (talk) 22:27, 19 February 2011 (UTC)[reply]
I should have said "the purple colour is due to...". Purple is a mixed colour and not a good designation for a single line. In astrophysics, I am not aware of any plasma where Hδ (which I would describe as violet) would be stronger than Hα; in those spectra that I am familiar with, it is much weaker. --Wrongfilter (talk) 23:08, 19 February 2011 (UTC)[reply]
My description might have been misleading; the reason for the lines being absorption is that the outermost layers are the coolest layers, therefore in these layers it will more often happen that an atom absorbs radiation than that it emits radiation. That's of course not the case with many many laboratory plasma vessels; the lines would vanish altogether if there is a sufficient number of them. Icek (talk) 21:00, 19 February 2011 (UTC)[reply]
Note that file:Hydrogenglow.jpg shows a glow discharge in a cold plasma. The purple light is only there because the plasma is being pumped to a higher ionization than it would have at equilibrium for its temperature. (I think the spiral thread around the tube carries RF current to provide the excitation energy). The Solar photosphere is much hotter and in rough equilibrium between ions and neutral atoms, so it doesn't show strong emission lines, compared to its blackbody spectrum. –Henning Makholm (talk) 01:34, 20 February 2011 (UTC)[reply]

Common hazel (Corylus avellana)

Can the climate in the midwestern US support common hazel (Corylus avellana)? --75.15.161.185 (talk) 19:56, 19 February 2011 (UTC)[reply]

Given its range in Asia, the midwestern climate certainly can. However, our article on Corylus americana gives an indication that Corylus avellana is vulnerable to a type of fungus that is found in North America. Looie496 (talk) 23:50, 19 February 2011 (UTC)[reply]

comments on this experimental design to test homeopathy?

I'd like to test the basic premise of homeopathy. I was thinking of doing it like this.

Description of experiment

Get thirty small plants and put them on a very large table, along with six large, empty squirt bottles, and thirty small and thirty larger envelopes. Create thirty cards, ten with "A" on them, ten with "B" on them, and ten with "C" on them. Shuffle the thirty cards face down. Without looking at what is written on it, put each one in a small envelope, sealing it. Then shuffle all the small envelopes again thoroughly. Put each one in a big envelope, sealing that one as well. Shuffle the big envelopes thoroughly, and put one in front of each plant, without opening any of them.

Next set aside three of the six squirt bottles. These are supposed to be the pristine bottles, don't touch them again. However, do one thing: repeat the above procedure with three more cards, one labelled "A", one labelled "B", and one labelled "C". Shuffle them face-down, put one in each of three small envelopes and seal them, shuffle them again, and put one in each large envelope, shuffle them again, and tape one envelope onto each pristine bottle.

Handle the remaining three bottles like this. In one, put Nutri-Grow plant supplement. Fill the second with tap water. In the third, put Nutri-Grow supplement, then dilute it with water. Mix well, and discard most of it. Dilute it with more water, and repeat: diluting, mixing well, and discarding. Per homeopathy, the more you dilute in this way, the greater the homeopathic strength. Repeat the process until you have uber-strength.

Now there is just one step before you begin. You have to be blinded as to which of the three earlier set-aside bottles will contain which of your three concoctions.

So one last time, you create three cards, reading "1", "2", and "3", shuffle them face down, put one in each of three small envelopes and seal them, shuffle those face down, then put one in each of three large envelopes and seal them. Shuffle again, and put one in front of each bottle you have handled. Finally, write the phrase "Label of what it is" on three large envelopes, and put them face-down and shuffle them. (So you only know that the large envelopes will say "label of what it is", but the handwriting can't give a clue). You write the actual name of the concoction "nutrigro", "water", or "homeopathy" respectively on a card. This time you don't shuffle them! Carefully keeping track of each one as you're doing it, you put the label of what it is on a card, you put the card in a small envelope and seal it, then you put the card in a large envelope that already reads "Label of what it is" and seal it, and you put it by that bottle.

So, this is what you have now. Next to each water bottle, you have two envelopes, one unmarked and one marked "Label of what it is".

The three pristine water bottles you haven't touched already have one envelope on them.

Now you leave the room and ask someone else to do one simple thing. Being careful to keep "label of what it is" with each used water bottle, they are to do the following. They open the unmarked envelope next to the used bottle, read a number "1", "2", or "3", and put the contents of the used water bottle into that pristine bottle (i.e. the first, the second, or the third pristine bottle). Then they tape "label of what it is" to that pristine water bottle. It's important that they do this one at a time and not mix up the "label of what it is".

Okay, now they leave, taking the used water bottles, the opened envelopes, and the cards 1, 2, 3. I only come back after they've left.

Now I'm properly blinded (and they also don't know which water bottle is A, B, or C). I have three squirt bottles, each with one unmarked and one marked envelope ("label of what it is") taped to them. On each one, I open the unmarked envelope, open the smaller envelope, take out the card, and tape it to the squirt-bottle. Now the squirt bottles are labelled "A", "B", and "C", and there is an unopened card on each one reading 'label of what it is'.

The next part is easy. I open the cards next to the thirty plants, so that they become labelled with "A", "B", and "C" randomly (not really in groups or anything). For a few weeks, I use the contents of the squirt bottle, A for the A's, B for the B's, and C for the C's, as a plant supplement. I can possibly tell which one is NutriGrow, because it's not just water, but I couldn't tell homeopathy from water, so I am absolutely unbiased and can't influence the results.

After a few weeks, I ask my friend to come measure the plants (I don't do it myself because I'll probably learn by then which one is NutriGrow). Now we have ten measurements for A, for B, and for C.

We do statistics. The prediction is that the data in one of them is clearly able to be differentiated from the data in the other two, since it involves real plant fertilizer. However, two of the three groups should be statistically indistinguishable.

The most amazing result would be if all three groups were clearly different with high statistical confidence. That would be pretty startling.

In any case, I would publish the results on the Internet. Then I would unblind, by reading "Label of what it is" on each of the bottles A, B, and C. There is room for one more shock here: maybe two of the groups were the same, but that's because homeopathy is JUST as effective as NutriGrow! So, if the group that got the water supplement grew slower than the NutriGrow group, but the homeopathy group grew just as fast as the NutriGrow group, that would be pretty amazing. Also amazing would be if the homeopathy group grew statistically better than water, but worse than NutriGrow; or statistically better than NutriGrow; or statistically worse than water. All of these would be startling.

Equally startling would be if water were more effective, statistically, than NutriGrow :) So, the experiment would also incidentally test that. The 'null hypothesis', if I'm using that word right, is that one group will statistically grow more, and the other two - the water and homeopathy - will be indistinguishable.

What do you guys think? 109.128.192.218 (talk) 23:38, 19 February 2011 (UTC)[reply]
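For the statistics step at the end, here is a minimal sketch of one reasonable analysis (the measurements below are made-up placeholder numbers, and a one-way ANOVA followed by pairwise t-tests is just one defensible choice, not the only valid one):
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Hypothetical final heights (cm) for the ten plants in each blinded group.
group_a = np.array([12.1, 13.4, 11.8, 12.9, 13.0, 12.5, 11.9, 13.2, 12.7, 12.3])
group_b = np.array([15.0, 14.2, 15.8, 14.9, 15.5, 14.7, 15.1, 14.4, 15.6, 15.2])
group_c = np.array([12.4, 11.9, 12.8, 12.2, 13.1, 12.0, 12.6, 12.9, 11.7, 12.5])

# One-way ANOVA: is there any difference among the three groups at all?
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)
print("ANOVA p-value:", p_anova)

# Pairwise t-tests to see which group(s) differ (ideally with a
# multiple-comparison correction such as Bonferroni: 0.05 / 3).
for name, (x, y) in {"A vs B": (group_a, group_b),
                     "A vs C": (group_a, group_c),
                     "B vs C": (group_b, group_c)}.items():
    t_stat, p = stats.ttest_ind(x, y)
    print(name, "p-value:", p)
</syntaxhighlight>
With only ten plants per group the statistical power will be modest, so a real difference would need to be fairly large to show up at the 95% level.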

I've collapsed the description, which really doesn't belong here anyway. To answer the question, what I think is that this is an awful lot of trouble to go to to show that homeopathy does not work on plants. Looie496 (talk) 23:59, 19 February 2011 (UTC)[reply]
If I have a table of appropriate size, it will take half an hour to bring plants in and set them up, maybe another half hour to do the blinding, and then about two to three minutes a day for, say, 30 days, i.e. another hour and a half. The cost would probably be less than $100. I think this is not too much work to have a scientific answer to whether homeopathy works. Moreover, I expect a one in twenty chance of showing that homeopathy works at the 95% confidence level, in which case I could probably get $10,000 for the research from the "industry". In other words, the expected value of the research is 5% * 10k = $500 but the cost is only <$100. So, it's not too much work at all, really... Of course, because of the prior probability (the fact that homeopathy can't possibly work) 95% confidence isn't nearly enough to establish anything, but I think that won't stop homeopathic companies from paying 109.128.192.218 (talk) 00:30, 20 February 2011 (UTC)[reply]
You might want to read our Homeopathy article for further ideas about testing its efficacy, and what such testing is claimed to have demonstrated so far. The Effect in other biological systems section of that article says something about homeopathic effects on "grain" but I could not find any other mention of your interest in homeopathy as applied to plants. I don't know much about it, but I had the impression it was mostly something believed to have effects on people. Your experimental design for the plant thing sounds okay, though I agree with Looie that it seems like a lot to go through to test something that really isn't even being claimed all that much by homeopathists afaik. So: why not work on coming up with some similar experiment with people (or maybe even some suitable lab animals?) as test subjects...? WikiDao 00:50, 20 February 2011 (UTC)[reply]
because if diluting shit until the water only has a memory of it makes the shit stronger, then it will work with nutrigrow. by taking the people out of the equation, if there IS an effect, my experiment would be super-famous forever. It's not like I'm testing a different mechanism from people-homeopathy. It's the same concept, minus placebo effect. 109.128.192.218 (talk) 01:56, 20 February 2011 (UTC)[reply]
So go for it. You'll want to get it published in a respected peer-reviewed scientific journal if you want to be <s>super-famous forever</s> taken seriously by anyone, though. WikiDao 02:09, 20 February 2011 (UTC)[reply]
Though keep in mind that the ability of people to find ways to make vague theories fit any data is nearly unlimited. "Oh, we never said it would work with fertilizer, or on plants," goes the obvious response. "Of course you'd get that result. We'd have predicted that ourselves using our own models." --Mr.98 (talk) 03:26, 20 February 2011 (UTC)[reply]
What should be done to the plants was explained by Terry Pratchett and Neil Gaiman in their book Good Omens. 66.108.223.179 (talk) 01:27, 21 February 2011 (UTC)[reply]
If the "basic premise" of homeopathy is that like cures like, then you're not testing it because you're not attempting to cure the plants of anything. (That is, after all, what the word "homeopathy" actually means.) You are also not scientifically diluting the mixture. At least those making the diluted preparations for homeopathic use do it very, very precisely: I see no indication in your method that you will do that. Also, homeopathic prepations are not only diluted, but shaken (the term used is succussion). Homeopaths claim that it is this succussion that imprints the chemical on the memory of the water: again, you do not mention that. Once you have copied exactly the processes used by homeopathic chemists, then you can start your experiment. Oh and don't expect the scientific world to take you seriously if your results are anything but "homeopathy doesn't work". There are plenty of studies that show positive results for homeopathy, but scientists take great pleasure in picking them apart and saying "oh the sample size was too small" or "it didn't work because homeopathy can't work, must have been something else that worked, therefore the experiment was flawed". Besides, if you approach the experiment with the intention to disprove homeopathy, then you have destroyed the experiment. Your intent also skews the result. --TammyMoet (talk) 09:05, 20 February 2011 (UTC)[reply]


I think you missed that for each step in the dilution process you need a clean bottle. Has anyone claimed that highly diluted fertiliser would work? If not, I do not think your test will show anything important. (Homoeopaths will claim it is a straw-man). I think it would be better to test a commercially available homoeopathic product (and analyse it to see that it really was highly diluted). Gr8xoz (talk) 16:32, 20 February 2011 (UTC)[reply]
If that did work, I think that farmers would be more interested than homeopaths since you'd save them a shed load of $$$ on fertiliser every year. Nature already does that test pretty much, though - rainwater is very dilute nitric acid and nitrous acid, but plants evidently grow faster when they have fertiliser too. SmartSE (talk) 23:21, 20 February 2011 (UTC)[reply]
That is a good point, and thus it may have relevance to people in the agricultural sciences, even if only for them to set it in stone that "this is wrong, so don't even try it if you thought of trying it". So yes, why not try the experiment. But I'll second the requirement that you must use a fresh clean bottle each time, because both you and homeopathists don't want the medicine "contaminated" by an actual active ingredient. SamuelRiv (talk) 18:56, 21 February 2011 (UTC)[reply]
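On the dilution point itself, a quick sketch of why contamination and clean bottles matter so much (assuming you start from roughly a mole of active substance and use the standard 1:100 centesimal dilution step; both numbers are just illustrative assumptions):
<syntaxhighlight lang="python">
# Expected number of molecules of the original substance after serial 1:100 dilutions.
N_A = 6.022e23          # start with about one mole of active ingredient
molecules = N_A
for step in range(1, 31):
    molecules /= 100.0
    if step in (6, 12, 13, 30):
        print(f"after {step:2d}C: about {molecules:.2e} molecules expected")
# Beyond roughly 12C the expected count drops below one molecule, so any
# stray residue in a reused bottle would dominate what is left of the original.
</syntaxhighlight>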

February 20

gas central heating

how much does a new natural gas central heating system cost for a 1700 square foot home? --Tomjohnson357 (talk) 01:36, 20 February 2011 (UTC)[reply]

That depends wildly on how the house is constructed and insulated and which climate the heating system will be sized for, not to mention the labor costs in whichever economy you're located in. It is probably easiest simply to ask some local heating contractors for estimates. –Henning Makholm (talk) 01:47, 20 February 2011 (UTC)[reply]

2-story house, cold climate; I just need an approximate cost --Tomjohnson357 (talk) 01:53, 20 February 2011 (UTC)[reply]

What kind of heat? Hot water? Steam? Air? If it's hot water then it's around $2000 to $4000 for the boiler. Hot air is cheaper, probably around $1000 to $2000. I'm assuming a high efficiency model (about 93% efficient), if you go with the 80% models (if legal in your area) prices are much lower. That's for parts, installation varies a lot but is probably in the $1000 to $3000 range. Ariel. (talk) 02:55, 20 February 2011 (UTC)[reply]

it is natural gas --Tomjohnson357 (talk) 03:42, 20 February 2011 (UTC)[reply]

I know; you said that. But how is the heat transmitted through the house? Hot water (i.e. radiators) or Air (i.e. air ducts)? Or something else? Steam? Hydronic AKA underfloor? Do you have lots of zones? A few? Just one? Anyway, regardless of the answers to my questions, I hope my estimate was useful. If you really want, then take a high quality picture of your current heater and I'll see if I can figure it out. Take one "overview" shot, and then closer shots of any parts or pipes that look interesting. Ariel. (talk) 09:39, 20 February 2011 (UTC)[reply]

Lab rats

Would it be reasonable to say that lab rats are bred to be prone to getting cancer? I read once that they were so sensitive to getting cancer, that changing the amount of food they eat can statistically significantly influence their rate of cancer. 75.138.198.62 (talk) 01:51, 20 February 2011 (UTC)[reply]

Actually, most lab rats used in modeling cancer are deliberately given cancer genes. If you're a doctor studying cancer and using rats to model it, you give your rats cancer, specifically the cancer you want to study. Otherwise, generic lab rats are no more likely to get cancer than you'd expect. --Shaggorama (talk) 03:00, 20 February 2011 (UTC)[reply]
(ec)No, that would not be reasonable to say. However there probably is a breed of rat that is very prone to cancer, but it's specific to that breed, not lab rats in general. Plus it has nothing to do with the amount of food they eat - they just get cancer randomly. Ariel. (talk) 03:01, 20 February 2011 (UTC)[reply]
I think you're confusing the Oncomouse with lab rats in general. --Mr.98 (talk) 03:25, 20 February 2011 (UTC)[reply]
No, the OP is actually correct, lab rats do develop tumors pretty frequently without any special provocation. I don't know of any evidence that they are more prone to it than wild-types, though, but I haven't ever looked into it. Looie496 (talk) 07:49, 20 February 2011 (UTC)[reply]
If they are, it's probably due to being highly inbred. Most lab mice & rats aren't truly "wild-type", but are inbred strains, which are chosen specifically so that experiments are affected as little as possible by genetic variation. The principle of heterosis would say that such strains are, in general, more susceptible to most diseases/conditions/ailments than wild-types. So I'd say that, aside from specific strains like oncomouse, it's not "bred to be prone to getting cancer", as the selective pressure was for genetic sameness, not cancer susceptibility. -- 174.21.250.120 (talk) 18:00, 20 February 2011 (UTC)[reply]
And note that changing the amount of food people get can also affect their chances of getting certain cancers: [8]. So, it's no surprise that this would happen in mice, too.
Also, if you are headed toward the conclusion that "lab rats get cancer from everything, so you can't draw any conclusions if they get cancer from substance X", then that's plain wrong, since they only look at the increase in cancer cases from an exposure to a given substance, versus the number of cases in the control group. StuRat (talk) 20:59, 20 February 2011 (UTC)[reply]

Measuring orbital and rotation periods

If there was nothing in the Universe except the Earth and Sun (in particular, no fixed stars to refer to), and the Earth's orbit around the Sun was perfectly circular, then how would we measure the length of one Earth orbit? If there was nothing in the Universe except the Earth, how would we measure the length of one Earth rotation, and how would we locate the Poles (ignoring the fact that it would be too cold and dark for life to exist)? 86.179.112.215 (talk) 03:04, 20 February 2011 (UTC)[reply]

People have argued that it might be impossible (Mach's principle), but I'll ignore that; you can imagine you're simply inside a dust cloud that blocks your view of the stars. In that case, you can measure the (sidereal) day length and find the poles with a Foucault pendulum (though it would be difficult to get the necessary precision), and you can calculate the year length from the difference in length between the sidereal and solar day. -- BenRG (talk) 04:16, 20 February 2011 (UTC)[reply]
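To make that last step concrete, a minimal sketch of the arithmetic (the familiar modern day lengths are used purely as placeholders; the hypothetical observers would have to measure both quantities themselves, one with the Foucault pendulum and one with the Sun): for a planet that rotates in the same sense as it orbits, the rotation rates add, so 1/T_year = 1/T_sidereal - 1/T_solar.
<syntaxhighlight lang="python">
# For prograde rotation: omega_sidereal = omega_solar + omega_orbit,
# hence 1/T_year = 1/T_sidereal - 1/T_solar.
T_sidereal = 86164.1   # s, one rotation relative to "absolute" space (Foucault pendulum)
T_solar = 86400.0      # s, one rotation relative to the Sun (mean solar day)

T_year = 1.0 / (1.0 / T_sidereal - 1.0 / T_solar)
print(T_year / 86400.0, "solar days")   # roughly 365.3
</syntaxhighlight>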
Linear motion is relative - you can only detect it by comparing to something else, but circular motion is not. It's absolute, and you can measure it without reference to anything else. (Although you may run into measurement problems if the motion is very slow, it's just a matter of the necessary accuracy, not a problem in principle.) Ariel. (talk) 04:38, 20 February 2011 (UTC)[reply]
If this is a technologically advanced society, then they could also launch their own spaceships, and use those as a reference. Or, if they can measure the distance to the Sun, the length of a year could be determined by that, I believe, as each orbit has a distinct orbital speed. (However, without any other sources of info, they may not know about this relationship.) StuRat (talk) 20:51, 20 February 2011 (UTC)[reply]

Donating blood, good for health?

Can donating blood be good for your health? The article about it suggests that it can reduce the amount of iron in your blood, which can be good for people with too much of it. It also suggests that it can improve heart health in men. But why is the latter plausible? And wouldn't the reduction of iron be beneficial in all men? 212.169.189.114 (talk) 04:09, 20 February 2011 (UTC)[reply]

Haven't heard about the iron idea, but I reckon that a free mini-medical checkup every three months is good for me. HiLo48 (talk) 06:03, 20 February 2011 (UTC)[reply]
I remember reading some years ago a study that suggested that long time blood donors replenished their red blood cells faster after a blood loss. Don't ask me to find it, though.Sjö (talk) 08:46, 20 February 2011 (UTC)[reply]
Iron overload#treatment - the treatment is bloodletting. I don't know if the blood would be suitable for transfusion, though. I think I've heard something similar to Sjö regarding blood donors recovering blood cells faster, but I've been unable to Google anything up, so I'd be skeptical. Vimescarrot (talk) 11:49, 20 February 2011 (UTC)[reply]
Bloodletting is, from a medical perspective, equivalent to donating blood. However, it has fallen into disuse as a treatment over the centuries. 81.47.150.216 (talk) 13:00, 20 February 2011 (UTC)[reply]
...except for in cases like iron overload, as explained in the link that Vimescarrot gave in the very comment you are replying to. I've certainly read of cases where people regularly donated blood, and only later found that they suffered from iron overload, implying that the donating of blood had suppressed their symptoms like any other bloodletting. There was no suggestion that the blood had been rejected. 86.161.110.118 (talk) 18:20, 20 February 2011 (UTC)[reply]
Oh, I should also note that the article on bloodletting you link helpfully includes a few other diseases it is still considered useful for treating (although sometimes only in the absence of other treatment), including 'the fluid overload of heart failure' (?) and possibly high blood pressure. 86.161.110.118 (talk) 18:34, 20 February 2011 (UTC)[reply]
There's been some speculation that it might help prevent heart attacks in men. If I remember right, the hypothesis is that iron overload is the reason for the difference in heart attacks between men and women since women lose iron on a monthly basis, but no one's been convinced yet and honestly the whole gender difference there may be overblown (q.v. the red dress campaign here in the US and A). Charity in general tends to make people feel good about themselves, which is probably beneficial for health, but so far the only people that obviously benefit from blood donation are people with specific health issues, most frequently hemochromatosis and polycythemia. SDY (talk) 18:10, 20 February 2011 (UTC)[reply]
Does donating blood make you hungry ? (If you lose some portion of your blood sugar, it might.) If this results in you eating a steak, then you might replace all the lost iron, and then some.
Also, I would expect that donating blood could help with any excess nutrient in the blood, where that excess is harmful, and similarly could hurt where a deficiency is present. If both excesses and deficiencies exist, then determining whether it's a net benefit could be tricky. StuRat (talk) 20:45, 20 February 2011 (UTC)[reply]


Perhaps this is the case only because the evolutionary design for men assumes that men would be doing hard physical work every day like running for one hour. Count Iblis (talk) 15:05, 21 February 2011 (UTC)[reply]

Radiation

What dosage of radiation (only radiation, not any explosion that might come with it) would be required to kill the stereotypically-used-as-an-example healthy adult male immediately or within several seconds? What might be the source of this (I'm interested in a small, very radioactive substance such as that involved in the Demon Core accident, rather than a bomb)? I do not plan to try this at home, and I suggest that any readers similarly refrain. 72.128.95.0 (talk) 04:15, 20 February 2011 (UTC)[reply]

Page 2 of this NASA document for kids says that 8,000 rems (80 Sv) will be fatal within an hour and 10,000 rems (100 Sv) will cause instant death. Radiation_poisoning#Exposure_levels has some figures in sieverts. Nanonic (talk) 05:56, 20 February 2011 (UTC)[reply]
See also Polonium#Toxicity for other figures on what is regarded as the most radioactive element. It states that a tiny tiny amount ingested or inhaled would be fatal. Nanonic (talk) 06:15, 20 February 2011 (UTC)[reply]
Polonium-210, which I imagine is the one you're talking about, is pretty damn radioactive, but very far from "the most radioactive". Grosso modo, the shorter the half-life, the more radioactive the isotope (although there are other variables such as the decay type and energy; for example tritium is not nearly as dangerous as its short 12-year half-life might suggest). For Po-210, the half-life is 138 days. But for, say, francium, the longest-lived isotope has a half-life of 22 minutes. You could kill someone with much less francium, assuming you could somehow accumulate it and get it into the victim's system before it all decayed. --Trovatore (talk) 09:22, 20 February 2011 (UTC)[reply]
Indeed. Polonium has a reputation for dangerousness because it is just long-lived enough to actually be useful (e.g. as a poison, or, more commonly, as a component of neutron initiators). --Mr.98 (talk) 14:01, 20 February 2011 (UTC)[reply]
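(For the numerically inclined, here is a rough sketch in Python of how much the half-life difference above matters. The only inputs are the half-lives quoted in this thread, 138 days for Po-210 and 22 minutes for Fr-223, plus standard molar masses; it illustrates the scaling, it is not a radiological reference.)
<pre>
import math

N_A = 6.022e23  # Avogadro's number, atoms per mole

def specific_activity(half_life_s, molar_mass_g):
    """Decays per second (Bq) per gram of a pure isotope."""
    decay_const = math.log(2) / half_life_s   # lambda = ln 2 / t_half
    atoms_per_gram = N_A / molar_mass_g
    return decay_const * atoms_per_gram

po210 = specific_activity(138 * 86400, 210)   # half-life 138 days
fr223 = specific_activity(22 * 60, 223)       # half-life 22 minutes

print("Po-210: %.1e Bq/g" % po210)            # ~1.7e14
print("Fr-223: %.1e Bq/g" % fr223)            # ~1.4e18
print("ratio:  %.0f" % (fr223 / po210))       # Fr-223 is roughly 8500x more active per gram
</pre>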

Calculating Jerk in a BUS...

Hi, I have observed that while travelling in a bus, when the bus passes over a speed breaker, passengers sitting at the front of the bus feel less of a jerk than those sitting at the rear. I don't know the exact reason behind this, but I assume it is due to the psychology of the driver: when the front tyres pass over the speed breaker, the driver slows the vehicle down, and just after that he speeds up without caring about the rear passengers. If there is any other reason, please tell me. Also, how can I calculate where in the bus the jerk will be least? —Preceding unsigned comment added by 220.225.96.217 (talk) 05:17, 20 February 2011 (UTC)[reply]

Yes, I've noticed the same effect in both buses and cars, and it happens even at constant speed. I think it is a feature of the rear suspension, perhaps combined with a coincidence of the rear wheels receiving an upwards impulse just as the rear of the bus is already rising with the rebound of the front-wheel impulse. The effect varies with speed, and the mathematics is not simple because " jerk" is the third derivative of distance with respect to time. I'm fairly certain that somewhere close to the center of the bus will be optimal for "least jerk", but I'll leave this for someone else to prove. Dbfirs 08:18, 20 February 2011 (UTC)[reply]
Setting aside speed and suspension, which obviously affect the bounciness of everything, the rider's position relative to both tires matters. A typical bus (at least the kind I see, similar to File:NYC Transit New Flyer 840.jpg) has one set of tires very near the front (often ahead of the whole passenger area), and then the seats extend back to a few rows behind the rear tires. When the front tire goes up and down on a bump, it's like moving the whole bus as a lever pivoted at the rear tire: the further you are from that pivot, the greater your vertical motion. Nobody is very far behind that pivot, and the driver is the furthest forward (so maybe he's minimizing his bounce?). When the rear tire goes up and down on a bump, it's like moving the whole bus as a lever pivoted at the front tire: again, the further you are from the pivot, the greater your vertical motion. The driver is right near that pivot, so he mostly just tilts forward/backward rather than actually bouncing up/down. Moving towards the rear, the vertical motion gets greater. Those behind the rear tire move even more up/down than the tire itself (being beyond it on the lever), which is a type of motion that nobody else experiences. DMacks (talk) 10:32, 20 February 2011 (UTC)[reply]
Jerk is the rate of change in acceleration. If F = ma is considered, then jerk is also the rate of change in force (with respect to time) per unit of mass. Use spring mechanics and lever rules to develop an approach. Plasmic Physics (talk) 11:57, 20 February 2011 (UTC)[reply]
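(If anyone wants to put numbers on this, jerk can be estimated from a sampled seat-position signal as a third finite difference. A minimal Python sketch follows; the damped-oscillation "seat height" is made up, standing in for a real recording from a bus, so treat the output as purely illustrative.)
<pre>
import math

dt = 0.01                      # sample interval, seconds
# made-up seat height after a bump: a damped oscillation, in metres
z = [0.05 * math.exp(-2 * i * dt) * math.cos(10 * i * dt) for i in range(300)]

def third_derivative(samples, dt):
    """Estimate the third time derivative (jerk, m/s^3) by repeated differencing."""
    d = samples
    for _ in range(3):
        d = [(b - a) / dt for a, b in zip(d, d[1:])]
    return d

jerk = third_derivative(z, dt)
print("peak |jerk| is roughly %.0f m/s^3" % max(abs(j) for j in jerk))
</pre>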
Jerk as a technical term is the rate of change of acceleration. The connection to what the OP calls the jerk that a passenger "feels" is less than clear. --Trovatore (talk) 20:15, 20 February 2011 (UTC)[reply]
Yes, the body is most sensitive to rate of change of acceleration, but the eyes notice the maximum displacement and the velocity of the oscillation. Dbfirs 22:04, 20 February 2011 (UTC)[reply]
I'm guessing (this is speculation) that the body is most sensitive to things like maximum displacement of internal organs, or the energy dissipated within the body as the organs bounce around. There is not going to be any simple relationship between those things and the rate of change of acceleration. --Trovatore (talk) 22:10, 20 February 2011 (UTC)[reply]
Yes, these effects can be felt, especially if some organs resonate, but the body is most sensitive to the third derivative of displacement, and cannot feel either displacement or velocity. Acceleration is felt, but is not immediately noticed unless its rate of change is large. "Bouncing around" produces a large rate of change of acceleration and this is the jerk that is most noticeable. Dbfirs 23:00, 20 February 2011 (UTC)[reply]
My point stands, though. The human body has no "jerkmeter" per se (in the sense of an instrument that measures the third time derivative of position), and jerk in the sense of the third derivative has no special physical significance, unlike the second derivative. The third derivative is being used as a proxy for other things; I don't know what specifically, but I made my best guess as to two of them. The natural-language word jerk should not be confused with the third time derivative; the latter is a technical sense and does not correspond directly to any felt quantity. --Trovatore (talk) 10:00, 21 February 2011 (UTC)[reply]
I strongly disagree. Rate of change of acceleration corresponds to rate of change of force, and this is exactly what the body is most sensitive to. To see this, try travelling in a high-speed lift (elevator in your country). The high quality elevators in tall buildings have smooth acceleration that is barely noticeable, whereas the cheap lifts in smaller buildings produce a very distinct "jerk" on starting and stopping because they change very abruptly from zero acceleration to a large acceleration and back to zero. The mathematical term jerk (also called jolt, surge or lurch) for rate of change of acceleration was chosen precisely because it corresponds to the perceived jerk that the body is sensitive to. An alternative experiment would be to apply a force very gradually to any part of the body. You will hardly notice the force if it is applied very gradually (assuming that it is not large enough to cause damage), but if the same force is applied suddenly (with a large third derivative) then you will notice it much more. Dbfirs 14:01, 21 February 2011 (UTC)[reply]
If you think the body is directly sensitive to rate of change of force, then I think you're just wrong. The reason you have more reaction to force that increases quickly is that you have different equilibrium positions for internal organs depending on how fast you're accelerating. If the acceleration comes on suddenly, then they have to move to different positions, and will therefore bounce around. It's that bouncing that you feel, not the rate of change per se. --Trovatore (talk) 17:49, 21 February 2011 (UTC)[reply]
In addition to the comments above, another factor is the weight distribution on the bus, based mainly on the position of the engine. If the engine is in front, then the front will bounce slowly and the rear will move more quickly. With a rear engine design, this should be reversed. However, as people are often seated further behind the rear wheels than in front of the front wheels, the rear passengers might still have more of a bounce than those in front, even in this case. StuRat (talk) 20:32, 20 February 2011 (UTC)[reply]
Let me expand on my previous comment: based on what I said earlier, a "jerkometer" could be constructed from a spring with a mass attached, plus a high-speed video recorder. Record the spring as it jerks. Using spring mechanics, calculate the maximum force experienced at the peak of displacement of the spring. Divide this force by the known mass attached to the spring, and divide the answer by the time elapsed between equilibrium and maximum displacement. Plasmic Physics (talk) 10:53, 21 February 2011 (UTC)[reply]
While we're on the topic, what is it that kills a person in a fall from great height? The high deceleration or the high (dejerk?)? Plasmic Physics (talk) 10:57, 21 February 2011 (UTC)[reply]
It's the high deceleration that kills. Jerk is irrelevant here since it is the force that kills, not its rate of change. I like the jerkometer, though it measures average jerk not instantaneous jerk, but with suitable spring constants it would be an excellent instrument for use at the back of a bus. Dbfirs 14:10, 21 February 2011 (UTC)[reply]
If you measure displacement (either maximum or time-average), don't you still only have a forceometer, not a jerkometer? As Dbfirs says, neither rapid acceleration nor large total acceleration over a long time is easily felt as a sudden change of acceleration. A mass on a spring is a pretty good approximation of a person, though (organs and stuff suspended in all the goo and stringy stuff). If we don't easily feel constant forces, we wouldn't care about a constant spring stretch either. Fighter pilots care about maximum force ("g"s) for safety, but that could still be a smooth acceleration in a tight turn rather than the jerk of an aircraft-carrier take-off. The jerk would be the rate of change of motion of the mass on the spring (per the third derivative): if it's bouncing up and down, so's your lunch; if it's stretched stably, you more quickly become accustomed to it. DMacks (talk) 17:37, 21 February 2011 (UTC)[reply]

Upper bound of the size of the universe

Tom Murphy says: "All we are prepared to say for now is that over 13.7 billion light year scales, the universe looks pretty flat: it doesn’t deviate by more than 2% from being flat. But, the possibility exists that the universe is still curved on much larger scales. It’s just like the fact that the earth looks flat locally, over small scales, but is curved on the whole. The universe could be closed into a sphere, but on a much larger scale than what we can see. A 2% limit translates to a factor of 50 (it takes 50 2%’s to make 100%), so we could say that if the universe is finite, it must be at least 50 times bigger than our 13.7 billion light year horizon."

If we use the same conditions (flat universe to a certainty of 2% and assuming the universe is finite), what is the upper bound on the volume of the universe? Leptictidium (mt) 09:05, 20 February 2011 (UTC)[reply]

Even allowing your assumption that it's finite, why should there be an upper bound? It would be rather odd if we could say "either the universe has radius at most 10^70 gigaparsecs, or else it's infinite". Not being an expert I can't rule out that cosmologists can make such a statement, but it seems very unlikely to me. --Trovatore (talk) 09:11, 20 February 2011 (UTC)[reply]
I was assuming that a maximum uncertainty of 2% also sets a maximum uncertainty for the size of the universe. Leptictidium (mt) 09:25, 20 February 2011 (UTC)[reply]
It doesn't deviate by more, but it could deviate by less. So it's a lower bound, not an upper one, and the upper bound should include infinity. BTW I'm not convinced your math (2% deviation = 50 times bigger, for a circle) is correct. I guess it depends on what he means by deviation. Ariel. (talk) 09:31, 20 February 2011 (UTC)[reply]
Leptictidium, are you aware that according to the simplest GR models, a universe that's flat or negatively curved is infinite, whereas one with positive curvature is finite? That means that "the universe looks pretty flat" means "the curvature is right on the borderline between predicting a finite universe and predicting an infinite one". As far as I know the question is still within experimental error.
I believe there are more sophisticated possibilities that don't observe this strict dichotomy. Certainly it's not a geometric necessity; for example the flat torus has zero curvature but finite size. Whether there are any GR solutions that look like a flat torus, or whether they are consistent with observations, I have no idea. --Trovatore (talk) 09:46, 20 February 2011 (UTC)[reply]
Murphy's argument is rather dubious. He converts a measurement of the geometry (curvature) to a limit on the topology (the size) of the Universe which is not possible. It is true that a positively curved Universe is necessarily closed (this refers to the curvature of 3D space) but the topology is, as far as I know, not unique, meaning that it does not necessarily have to be a (hyper-)sphere, as Murphy assumes. Flat and negatively curved spaces can be infinite but do not have to be (the flat torus is indeed a possibility). There are lower limits on the size of a finite Universe but they do not come from measurements of the curvature. One could also nitpick on the use of lookback time as the measure of the size of the horizon but we'll let that pass. --Wrongfilter (talk) 10:05, 20 February 2011 (UTC)[reply]
I don't think it is true that a positively curved universe is necessarily closed. I don't see why you couldn't take a Euclidean 3D space and assign a constant metric with positive curvature to every point. There is in general only a very weak connection between local geometry and global topology of a space. Looie496 (talk) 17:42, 20 February 2011 (UTC)[reply]
This is one of the strongest connections, see e.g. Page 12 of this article. The reason is that the hypersphere S3, the universal covering space of any positively curved space, is compact. Incidentally, "Euclidean" is "flat". --Wrongfilter (talk) 17:56, 20 February 2011 (UTC)[reply]
There are infinite spaces that are positively curved everywhere, such as a paraboloid, but there are no infinite spaces with constant positive curvature. The data can't rule out a universe with that shape (they can't rule out much of anything, unless it's so small that a significant fraction of it fits inside the visible universe). -- BenRG (talk) 19:57, 20 February 2011 (UTC)[reply]
"constant" is of course important. That is implied if the cosmological principle is assumed to hold. --Wrongfilter (talk) 20:02, 20 February 2011 (UTC)[reply]
Our article on this is Shape of the Universe. See also Doughnut theory of the universe. Red Act (talk) 15:39, 20 February 2011 (UTC)[reply]

Decay constant and half-life

We did an experiment in which there were originally <math>N</math> fair coins. Every minute, all the coins were tossed, and those that landed heads were removed. Then the half-life of this "decay" is clearly one minute, and so <math>\lambda = \ln 2 / t_{1/2} \approx 0.693</math>, roughly 0.7 per minute. However, if <math>\lambda</math> is the probability that a nucleus decays in unit time, why is this not equal to 0.5? jftsang 12:07, 20 February 2011 (UTC)[reply]

One reaaaally hand-wavy way of explaining why there's the ln(2) term in there (i.e., why the decay rate constant is larger than the naive 0.5 per half-life) is that a real exponential decay isn't a process of "sit unchanged for the time of one half-life, then suddenly and instantly half decays", but rather half decays spread over the time increment. So some of the material that is observed to be gone after one half-life actually decayed before that full half-life time was reached. As a result, the rate of decay for all the decaying particles together is faster than for just the ones in the "half that decays in the half-life" that manage to last to the end of the half-life before actually decaying. Our exponential decay article has a lot of gory math details proving the formula, if you prefer a "because the formula says so, but why is the formula correct and used at all?" type of answer. DMacks (talk) 14:15, 20 February 2011 (UTC)[reply]
Imagine that instead of tossing all the coins and then waiting a minute for the next round of tosses, you were to toss one coin every (minute/<math>N</math>) for the first minute. Obviously that would lead to the same result, that is, <math>N</math> tosses after one minute. That would be a slightly more realistic representation of an actual decay process, where half of the atoms don't simply decay all at once after one half-life. But then you would have to abruptly drop the rate of tossing to one toss every (minute/<math>(N/2)</math>). That abrupt change of pace shows that there was something wrong with what you were doing. You should have started with a slightly faster tossing pace at the beginning (because there were more coins to begin with, and the pace of decay is proportional to the number of coins) and then slowly eased that pace, so that there would be no discontinuity in the pace after the first minute. That means that the original decay rate must have been higher than the naive 0.5 per minute. It turns out to be about 0.7 per minute. Dauto (talk) 15:52, 20 February 2011 (UTC)[reply]
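(A quick simulation makes this concrete: if every coin independently survives each one-minute toss with probability 1/2, the survivor counts follow an exponential whose decay constant is ln 2, about 0.693 per minute, not 0.5 per minute. A small Python sketch, illustrative only:)
<pre>
import math, random

random.seed(1)
coins = 100000
counts = [coins]
for minute in range(10):
    # each remaining coin survives a toss with probability 1/2
    coins = sum(1 for _ in range(coins) if random.random() < 0.5)
    counts.append(coins)

# effective decay constant per one-minute step: lambda = ln(N_before / N_after)
lams = [math.log(counts[i] / counts[i + 1]) for i in range(len(counts) - 1)]
print("average decay constant: %.3f per minute" % (sum(lams) / len(lams)))
print("ln 2                  : %.3f" % math.log(2))
</pre>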

I hate to say this, but the previous answers will confuse you. From the half-life article, two equivalent formulas are
<math>N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}}</math> and <math>N(t) = N_0 e^{-\lambda t}</math>, where <math>\lambda = \ln 2 / t_{1/2}</math>.
Note that the first formula uses only 1/2 as a numerical value and is very easy to work with, but you've used the second formula, which has a certain appeal to mathematicians. Now why does your lambda value come to 0.7? Because ln 2 = 0.693147181! The 0.7 occurs in your formula solely to cancel out the "e" (the base of natural logarithms) which has been added into it, and convert it back to 1/2. Wnt (talk) 18:06, 21 February 2011 (UTC)[reply]

I thought my answer was pretty good. Dauto (talk) 19:26, 21 February 2011 (UTC)[reply]

Quark Gluon Plasma Experiments - trillions of Celsius degrees ?

I was reading an article about quark–gluon plasma and I noted that these experiments indicate that the temperatures in these phenomena reach around 4 trillion degrees Celsius. My question is: how can the devices in our accelerators withstand these extremely high temperatures without everything burning up? How can we keep this heat/energy isolated inside the accelerators? — Preceding unsigned comment added by Futurengineer (talkcontribs) 12:29, 20 February 2011 (UTC)[reply]

That temperature is achieved in a collision between two ions. It occupies a microscopic volume and lasts for a very small amount of time, cooling down as it expands. Dauto (talk) 15:27, 20 February 2011 (UTC)[reply]
I do not think that is a big problem. Quark–gluon plasma is created when two heavy atomic nuclei collide, so the volume of the plasma should be of the same order of magnitude as the volume of a nucleus, about 10 fm = 10×10^−15 m = 0.00000000001 mm across. This plasma will expand and cool down very quickly; since all particles have relativistic speeds, I would assume that it only exists for something like 10^−21 s = 1 zs. This happens in the centre of a cooled vacuum pipe a few centimetres across. By the time the particles in the plasma (or the particles produced) reach the pipe wall, they will be very sparse. If we scale this up so that the plasma is the size of an atomic bomb (r = 0.1 m approx.), then the container would have a size of the same order of magnitude as the Earth's orbit around the Sun. Of course, quark–gluon plasma is much, much hotter than an atomic bomb, and there are several million collisions per second. I am no expert on this, so other editors can maybe give a more detailed answer. Gr8xoz (talk) 15:44, 20 February 2011 (UTC)[reply]
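(A back-of-the-envelope version of that scaling argument in Python, treating the quoted sizes as rough assumptions: a ~10 fm fireball, a beam-to-wall distance of a few centimetres, and a fireball scaled up to 0.1 m.)
<pre>
fireball_r  = 10e-15    # ~10 fm, roughly the size of a heavy nucleus, in metres
wall_dist   = 0.03      # beam-to-pipe-wall distance, "a few centimetres" (assumption), in metres
scaled_r    = 0.1       # imagine the fireball scaled up to atomic-bomb size, in metres

scale = scaled_r / fireball_r          # ~1e13
container_r = wall_dist * scale        # ~3e11 m

print("scale factor: %.0e" % scale)
print("scaled container radius: %.0e m" % container_r)
print("Earth-Sun distance, for comparison: 1.5e11 m")
</pre>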
An addition to what was said above: at a certain point in physics, you have to separate your "idea" of temperature from the definition of temperature, just as with every other "idea" that gets weirder as physics gets larger/smaller/hotter/colder. Temperature is defined, at the level of first principles (the basic axioms of classical physics), as the amount of heat energy needed to increase entropy (disorderly messy behavior of particles) by a given amount. So at these trillion-degree temperatures, relatively-enormous amounts of energy are needed to change the arrangement of the system just a tiny bit, but the particles in the accelerator don't necessarily "feel" hot (also because they don't really hold their own heat in like objects that we see around us do). For educational fun, if you want to reverse the definition above such that adding heat lowers entropy, you get negative temperature. SamuelRiv (talk) 19:23, 21 February 2011 (UTC)[reply]

cell wall in plant cells

I would like to know: what is the role of the cell wall in the lifespan of a plant cell? Does a thicker cell wall mean a longer life, or is that not so? And can plant cells live for hundreds of years? If yes, then how do they stay alive for such a long time, and is there any role for the cell wall there? — Preceding unsigned comment added by Ptamhane (talkcontribs) 16:04, 20 February 2011 (UTC)[reply]

We know that plants can live thousands of years, so it stands to reason that plant cells can live thousands of years (unless you meant live that long without going through cell division, in which case I don't know the answer). I don't know whether cell walls play a role at all. Dauto (talk) 17:21, 20 February 2011 (UTC)[reply]
That is not necessarily true. Many trees shed their leaves every year. The living part of the bark may expand outwards every year, with the inner part dying. So a tree could live a thousand years, but that need not imply that any cell within it lives for that time. In the same way, a Chinese dynasty may exist over a thousand years, but none of its individuals lives a thousand years. 92.28.245.90 (talk) 17:51, 20 February 2011 (UTC)[reply]
The disagreement here is over the definition of the age of a cell. When a cell divides into two, do you get two new cells, or two cells that are each as old as the original cell? If I split a stone in half, the halves are as old as the original stone, but if we calculate the age of cells in the same way then we can argue that all cells are 3.7 billion years old, and that does not seem to be a useful definition. (It is unclear how such a definition applies to sexual reproduction.) The problem is that after the cell division there is no distinction between child and parent. Gr8xoz (talk) 19:10, 20 February 2011 (UTC)[reply]
If we say that all cells are the age of their parent, plus that cell's parent, etc., then aren't all cells the same age (of approximately 3-4 billion years), dating from the first cell ? Perhaps you had in mind an exception for the reproductive cells in sexual reproduction, but some plants reproduce asexually, so this doesn't seem like a sensible way to measure the age of cells, in such a case. StuRat (talk) 20:20, 20 February 2011 (UTC)[reply]
As I said "if we calculate the age of cells in the same way then we can argue that all cells are 3.7 billion years old and that does not seems to be a useful definition.". I just wanted to explain why the previous answers were contradicting. I do think it is more useful to measure from the latest cell division just as you seem to do. I do not how ever feel that this must be obvious for everybody so I think the age (lifespan) of a cell need to be defined before the question can be answered. Gr8xoz (talk) 21:10, 20 February 2011 (UTC)[reply]
The length of time a cell is viable is programmed into it. Programmed cell death in plant tissue. Look at very old trees and you'll see that they are hollow with just the outer layer still living. The wood itself is dead. --Aspro (talk) 17:36, 20 February 2011 (UTC)[reply]
Exactly. Only the layer between the bark and the wood is alive. And some of the cells in that layer may have been there for a long time. The question is how long? Also, How long do they live between cell division cycles? I'm not sure what exactly the OP has in mind. Dauto (talk) 18:04, 20 February 2011 (UTC)[reply]
The OP is simply asking whether it's the morphological characteristic (e.g. thickness of cell walls) or something else. --Aspro (talk) 18:22, 20 February 2011 (UTC)[reply]
I'm not sure about any direct relationships between cell wall thickness and longevity, but the trees with the longest living leaves tend to have thicker cell walls. This is because they live in dry (including cold) or nutrient-poor habitats where they are only able to photosynthesise very slowly, so to end up with a positive carbon gain over the lifetime of the leaf, they need to be kept for a long time. Thick cell walls help with this because they are harder for microbes to penetrate, deter herbivores as they are relatively inedible and in cold areas withstand ice blasting and allow water to be stored in the apoplast where it can freeze without damaging the inside of the cell. These papers discuss the inherent trade-offs that plants make when 'deciding' how to construct a leaf and sort of shows what I mean. I can't find any references, but I'm fairly sure such cells will be the plant cells with the longest life spans - most of a tree trunk is dead - as has been said, and the same applies to the large parts of the roots. I do have a paper reference showing that some leaves live for up to 12 years so this is probably the upper limit, unless you want to include embryos in seeds. SmartSE (talk) 23:46, 20 February 2011 (UTC)[reply]

Gholson, Mississippi

What is the population of Gholson, Mississippi? --Perseus8235 17:02, 20 February 2011 (UTC)

The United States Census Bureau does not recognize "Gholson" as a census designated place (nor as an incorporated town, city, or other municipality) in Mississippi. You can verify this at the official Census Factfinder website. There is a Gholson, Texas, nowhere near Mississippi. If an unofficial place informally called "Gholson" is located in Mississippi, no authoritative statistics exist for its population. Nimur (talk) 17:33, 20 February 2011 (UTC)[reply]
Before posting a request for help here, Perseus began an article about this community. I've found some sources and added them to the article, but a population source is not among them. As an unincorporated community, it has no official boundaries: as a result, it can't possibly have a specific population. Nyttend (talk) 21:19, 20 February 2011 (UTC)[reply]

Why Doesn't the recoil from Handguns hurt?

Although I have never been shot by a bullet, I've read anecdotes on the web that being shot with a handgun bullet while wearing a bulletproof vest is similar to being struck by a sledgehammer or a baseball bat. By Newton's Third Law, would this not mean that the recoil of the gun should feel equal to or greater than being hit by a sledgehammer/bat? Having fired handguns, I do not feel like I'm being hit by a sledgehammer each time I pull the trigger. Acceptable (talk) 02:50, 20 February 2011 (UTC)[reply]

This was asked on the misc desk, some of the regulars here might have some insight. CS Miller (talk) 17:40, 20 February 2011 (UTC)[reply]
Because the bullet is accelerated over the entire gun length, but decelerated over a much shorter distance (and hence in a much shorter time). 213.49.110.216 (talk) 18:04, 20 February 2011 (UTC)[reply]
And that's why powerful guns need longer barrels. Dauto (talk) 18:07, 20 February 2011 (UTC)[reply]
I do not think that is correct. I think the bullet leaves the barrel before any significant momentum is transferred to the body, so the velocity with which the gun hits you depends only on the generated momentum and the mass. See [9] Gr8xoz (talk) 18:56, 20 February 2011 (UTC)[reply]
I believe longer barrels are for increased accuracy, since that ensures that it will leave the barrel at a more precise heading towards the target. Some rather high-powered guns, such as mortars, have rather short barrels, but aren't as accurate. StuRat (talk) 20:35, 20 February 2011 (UTC)[reply]
Newton's Third Law states that if two objects interact directly then they will be affected by forces of opposite direction but the same magnitude. This applies to direct interactions with no time delay, such as the force between the bulletproof vest and the bullet; it does not apply to indirect interactions such as that between the gun and the bulletproof vest.
Newton's Second Law is more interesting: it implies that the same momentum, but of opposite direction, is needed to accelerate and to stop the bullet. (We will ignore air resistance and the recoil from combustion gases.) Momentum can be calculated as mass multiplied by velocity, or as force multiplied by time. Since the bullet plus the moving part of the bulletproof vest has less mass than the gun, it follows that the bulletproof vest will hit the body with a higher velocity than the gun does. If we assume that the forces on the bodies are the same in both cases (not necessarily true, but hard to estimate, and it illustrates the point), then it will take the same time to stop the gun as to stop the bullet and the moving part of the vest. Since the vest hits the body at a higher velocity, it will move further in that time, and so it will do more damage to the body. Gr8xoz (talk) 18:44, 20 February 2011 (UTC)[reply]
In addition to the force of the bullet strike being more compressed in area and time, I can think of two other factors:
1) The bullet may strike in an area less able to withstand high forces than the hand.
2) The victim is more likely to be surprised by the shot, and the lack of psychological preparation may make it seem worse than it would otherwise. StuRat (talk) 20:08, 20 February 2011 (UTC)[reply]
So you think that the human hand is more able to withstand a bullet strike than other parts of the body, and a gunshot will do less damage if one is psychologically prepared for it? I would not recommend putting either of those postulates to the test. SpinningSpark 21:01, 20 February 2011 (UTC)[reply]
I said it would "seem" worse, not "be" worse. Do you not know the difference ? As for the hand being more able to withstand forces than some other parts of the body, then that certainly is the case for some body parts, like the eyes. And I'm not talking about a direct bullet strike, but rather one felt through a Kevlar vest. StuRat (talk) 22:28, 20 February 2011 (UTC)[reply]
The forwards momentum of the bullet hitting the body is the same as the backward momentum of the gun (assuming no loss in speed from air resistance), so the impulse on the body is the same. What the questioner has not taken into account is that this impulse is really quite small, initially moving the body at less than an inch per second. The effect is noticeable only if it occurs as an impact over a small area, as with a bullet hitting a kevlar vest or a gun held lightly an inch or two from your face. Normally, a gun is held firmly so that there is no impact with a part of the body, and the recoil momentum is hardly noticeable as it is transferred to the body, though a shotgun recoil can be painful on the shoulder if not held firmly. If the bullet is caught in a thick pad that absorbs the energy over a sufficient distance and area, then the effect on the body will be similar, though perhaps more noticeable if unexpected. The film cliche of victims being thrown backwards by a bullet is entirely faked. It would take a hit from a missile launcher to produce the momentum effect depicted. Dbfirs 21:41, 20 February 2011 (UTC)[reply]
I thought of another reason:
3) The hand, being on the end of the arm, is more free to "recoil" than most of the body. Thus, the force is more gradually absorbed by the hand, wrist, elbow, and shoulder, instead of all being absorbed by the hand immediately. I suspect that if the hand was held firmly in a fixture while the gun was fired, it would hurt more and do more damage. As for shotguns held against the shoulder, the body likely still recoils somewhat, although not as much as the entire arm. StuRat (talk) 22:34, 20 February 2011 (UTC)[reply]
I've never shot a handgun, but I always thought that the best advice was to hold it firmly, thus transferring momentum to the body rather than allowing the gun to recoil freely and hit a part of the body. You would notice the recoil if you shot with an outstretched arm with the direction of firing perpendicular to your arm, but you will normally only feel pain if you allow the recoiling gun to hit you, or you allow your recoiling hand to hit a hard fixed object. Dbfirs 22:53, 20 February 2011 (UTC)[reply]
I have shot several handguns, including 9mm, .45 and a .50 cal revolver which would better be described as a hand cannon than a handgun :). My take is, as StuRat mentions just above, that your arm actually makes for a very good shock absorber: when you fire, your arm, not having a lot of mass, recoils with the gun over a certain distance. With the .50 cal in particular, you are strongly advised to take your head out of the way before you fire, because your hands are almost guaranteed to travel backwards past the point of your head/face, regardless of how firmly you are gripping. When you get hit by a bullet, however, your body, having a lot more mass, doesn't travel back very far at all, so the distance over which the energy is transferred is much shorter. Vespine (talk) 23:19, 20 February 2011 (UTC)[reply]
I would look at it in terms of "kinetic energy transfer" (though the "Force/area" issue and other points mentioned above are also relevant, I agree). So wouldn't the following reasoning be valid?
Because the momentum of the handgun must equal the momentum of the bullet, the energy transferred by the handgun to the hand is much less than the energy transferred by the bullet to the target.
Consider the M1911 pistol, which weighs about 1100 grams and fires .45 caliber slugs which weigh, say, about 11 grams:
<math>m_\text{gun} v_\text{gun} = m_\text{slug} v_\text{slug},</math> and since <math>m_\text{gun} \approx 100\, m_\text{slug},</math>
we also have <math>v_\text{gun} \approx v_\text{slug}/100.</math>
The kinetic energy of the slug is <math>E_\text{slug} = \tfrac{1}{2} m_\text{slug} v_\text{slug}^2,</math>
and that of the gun is then <math>E_\text{gun} = \tfrac{1}{2}(100\, m_\text{slug})\left(\tfrac{v_\text{slug}}{100}\right)^2 = \tfrac{E_\text{slug}}{100}.</math>
So, in this example, the kinetic energy of the handgun is only one-hundredth that of the slug. The kinetic energy absorbed by your hand is therefore far less than the kinetic energy absorbed by, say, a human body-part struck by the slug, which is why there is less tissue damage to your hand than there is to your target. WikiDao 00:35, 21 February 2011 (UTC)[reply]
I concur with WikiDao completely. The transfer of energy to the shooter yields the perceived recoil and the shooter must present adequate force (F=ma) to counter the effects...a product of body mass and resistive force (muscular). But who says they don't hurt?
Since the question was about the impact of the bulletproof vest on the body, and not about the impact of the bullet itself, the mass of the moving part of the vest should be added to the mass of the bullet in the calculation. So if the moving part of the vest has a mass of about 44 g, then the bullet plus vest (about 55 g in all) would impact the body with 20 times the energy of the gun's recoil, since for a fixed momentum the kinetic energy is <math>p^2/(2m)</math> and 1100/55 = 20. Gr8xoz (talk) 04:04, 21 February 2011 (UTC)[reply]
(... but much less than the energy of the bullet.) The vest is designed to absorb and spread energy, so probably much less than you calculate because energy is lost in the collision of the bullet with the vest. Dbfirs 07:53, 21 February 2011 (UTC)[reply]
My calculation is correct: I assume a fully inelastic collision between the vest and the bullet, which means that the vest absorbs as much energy as possible. After an inelastic collision with a stationary object, the bullet and the object together have the same momentum as the bullet had before, and this can be used to calculate the velocity and kinetic energy. The problem is that the vest does not move as a solid object hit at its centre of gravity, so an "equivalent" mass for the part that moves has to be calculated. To do this in detail you would need a detailed simulation of the deformation of the vest. I assumed 44 grams for a soft vest; a ceramic plate of course has a higher mass. I assume that the collision between the vest and the bullet is much faster than the collision between the vest and the body. Even if the vest is "designed to absorb and spread energy", it must follow the laws of physics. --Gr8xoz (talk) 12:22, 21 February 2011 (UTC)[reply]
Yes, I initially mis-read your reply, and I agree with your calculation if the bullet hits the vest and the vest then hits the body, but the interaction is, as you say, more complex, with the vest being, to some extent, molded to become part of the body. Energy transfer is diffused, but transfer of momentum is simple and small. Dbfirs 13:39, 21 February 2011 (UTC)[reply]
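(Putting the figures used above in one place: the 11 g slug, 1100 g pistol and 44 g "moving part of the vest" are just the assumptions already made in this thread, and the 250 m/s muzzle velocity is an additional rough assumption of mine. A short Python sketch of the shared-momentum bookkeeping:)
<pre>
m_slug = 0.011    # kg, .45 slug, as assumed above
m_gun  = 1.100    # kg, M1911 pistol, as assumed above
m_vest = 0.044    # kg, assumed "moving part" of a soft vest
v_slug = 250.0    # m/s, rough .45 ACP muzzle velocity (my assumption)

p = m_slug * v_slug                        # momentum shared by slug and recoiling gun

E_slug = p**2 / (2 * m_slug)               # kinetic energy of the slug
E_gun  = p**2 / (2 * m_gun)                # recoil energy of the gun, same momentum
E_vest = p**2 / (2 * (m_slug + m_vest))    # slug + vest after a fully inelastic hit

print("slug:        %.0f J" % E_slug)
print("gun recoil:  %.1f J (1/%.0f of the slug)" % (E_gun, E_slug / E_gun))
print("slug + vest: %.0f J (%.0f times the gun recoil)" % (E_vest, E_vest / E_gun))
</pre>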
It seems like there is a hidden punch in every bullet: when one person punches another, both hand and ribs suffer the same force of impact, yet the hand usually suffers less. And of course, "punching" with the barrel of a pistol makes the inequality greater. Wnt (talk) 17:54, 21 February 2011 (UTC)[reply]
Something else that helps to visualize this is an ice pick. If you hit someone with the business end, it will do far more damage to them than to you, both because of the decreased area (and thus increased pressure) at that end, and because the force is all applied to them at once, versus over maybe a half second to you, if you gave it a full swing. StuRat (talk) 19:44, 21 February 2011 (UTC)[reply]

Basic Physics

Suppose I shoot a bullet from 5 m directly at the ground, and at the same time I drop another bullet from 5 m towards the ground. Assuming the bullets weigh the same and the rifle doesn't slow its bullet down, will both bullets hit the ground at the same time? —Preceding unsigned comment added by 12.180.137.195 (talk) 19:13, 20 February 2011 (UTC)[reply]

Yes they should. Both the forward moving and horizontal moving bullet will arrive at the ground at the same time. *Ignoring air resistance* See http://media.pearsoncmg.com/aw/aw_0media_physics/hewittvideos/projectile_demo.html --Tyw7  (☎ Contact me! • Contributions)   Changing the world one edit at a time! 20:08, 20 February 2011 (UTC)[reply]
You've misread the question. It said the bullet is fired "directly at the ground", not parallel to it. StuRat (talk) 20:12, 20 February 2011 (UTC)[reply]
No, the bullet fired from the rifle will hit first, as it will leave the rifle going faster than the dropped bullet. The final velocity depends on both the acceleration due to gravity and the initial velocity. Note that I am assuming that "5m" means 5 meters. If you meant 5 miles, then both bullets would be going at terminal velocity by the time they hit (the dropped one having accelerated to that point and the fired one having decelerated). However, even in this case, the fired bullet would have moved faster initially, so would still hit first. StuRat (talk) 19:58, 20 February 2011 (UTC)[reply]
I think the OP has misunderstood a common thought experiment where one bullet is fired horizontally at the same moment as another bullet is dropped. Air resistance can complicate things a little bit, but essentially it is expected that the bullets hit the ground simultaneously, as long as the ground can be approximated as flat. See [10] Gr8xoz (talk) 20:13, 20 February 2011 (UTC)[reply]
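(For the record, a minimal Python check of both readings of the question, ignoring air resistance; the 300 m/s muzzle velocity is just an illustrative assumption:)
<pre>
import math

g, h = 9.81, 5.0      # gravity in m/s^2, drop height in metres
v0 = 300.0            # assumed muzzle velocity, m/s

t_dropped = math.sqrt(2 * h / g)
# fired straight down: solve h = v0*t + 0.5*g*t^2 for the positive root
t_fired_down = (-v0 + math.sqrt(v0**2 + 2 * g * h)) / g
t_fired_horizontally = t_dropped   # vertical motion is independent of horizontal speed

print("dropped:             %.3f s" % t_dropped)            # ~1.010 s
print("fired straight down: %.4f s" % t_fired_down)         # ~0.0167 s
print("fired horizontally:  %.3f s" % t_fired_horizontally)
</pre>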
See the article External ballistics for more background.
⋙–Berean–Hunter—► ((⊕)) 01:53, 21 February 2011 (UTC)[reply]

Light waves/photons

Are light waves composed of photons moving in a wave-like pattern? --Tyw7  (☎ Contact me! • Contributions)   Changing the world one edit at a time! 19:55, 20 February 2011 (UTC)[reply]

Unfortunately it is not that easy; see Wave–particle duality. Gr8xoz (talk) 20:03, 20 February 2011 (UTC)[reply]
(ec) No, "one photon" is just a certain amount of the energy in a light wave. It doesn't have a location in the wave. It's similar to asking whether your checking account is composed of one-cent/penny/yen coins. -- BenRG (talk) 20:10, 20 February 2011 (UTC)[reply]
That's not entirely true. For example, if you shoot one photon at a target, there will be exactly one collision. If you shoot half of two separate photons at it, there could be anywhere from zero to two collisions. The difference is more pronounced with fermions, which have a probability of zero of being in the same place, affecting the wave-form. — DanielLC 21:04, 20 February 2011 (UTC)[reply]

The easiest way to imagine wave-particle duality is that the wave is chopped into small sections, each with a fixed energy.
Actually, the wave is a probability wave, and the square of its amplitude is the probability. So the concept of a "photon" only comes into play when you know where it is for sure, i.e., the probability at that point is 1. Otherwise, the photon is nowhere and everywhere at once, so it's better to think of it as a wave. The same applies to matter. ManishEarthTalkStalk 13:34, 21 February 2011 (UTC)[reply]

Faking cold fusion

At the January 15th Focardi and Rossi press conference, a demonstration was done that supposedly shows cold fusion. The researchers themselves present it at [11]. They have got some media attention, such as [12], [13] and [14]. Many images can be found at [15]. They demonstrated 12 kW of heat generation by boiling water over about one hour. This is so much energy that it seems to rule out a mistake. This leaves two possibilities:

  1. A fraud.
  2. An interesting nuclear process such as the fusion of nickel and hydrogen that the researchers suggests.

If this is true it will be truly revolutionary, but I think experience shows that this type of demonstration is probably some kind of fraud. It is also hard to explain theoretically how it could be possible to get cold fusion, and why the radiation levels are so low.

Some "experts" have calculated that it would be hard or impossible to hide enough chemical energy in the device to fake it that way. (They calculated the amount of hydrogen and oxygen needed.) One suggestion on how to fake this that I have read in an article comment is that they did not feed it with water, instead they could have used hydrogen peroxide. Hydrogen peroxide decomposes in to oxygen and water vapour when it come in contact with a catalyst. Would the audience detect the difference between water and hydrogen peroxide by smell, viscosity, colour and so on??? Does anybody has any other suggestions in how they can have faked this? Gr8xoz (talk) 20:59, 20 February 2011 (UTC)[reply]

I saw mention of nickel powder - if they used aluminium or magnesium powder I doubt anyone could tell the difference, and those would have provided plenty of energy to boil water. As a general point, scientists are rather bad at detecting deliberate fraud. A professional magician would be much better at that. Ariel. (talk) 21:36, 20 February 2011 (UTC)[reply]
As I understand it, it was claimed to be a rather small amount of nickel powder, but since the inside of the device was not inspected, nobody except the people who built the device knows whether there is aluminium or magnesium powder in it, and in what amount. Do you mean the heat generated when magnesium reacts with water? Would that really be enough energy? Would not the magnesium compounds be easily detected in the steam, as a white powder? Gr8xoz (talk) 22:17, 20 February 2011 (UTC)[reply]
I was just saying that measuring the volume of hydrogen needed is pointless - there are plenty of other fuels you can hide that are much smaller. Also, what was in the compressed gas cylinder? It was labeled as nitrogen - but that doesn't mean that it's the truth. Plus did anyone check that the amount of steam really included ALL the water? Perhaps they drained some water and only made a small amount of steam? Ariel. (talk) 00:19, 21 February 2011 (UTC)[reply]
The device was connected to a hydrogen bottle during the experiment (supposedly fuel for the fusion process H + Ni -> Cu); it was weighed by supposedly independent persons before and after the test and lost about 1 g of H2. The nitrogen was connected afterwards to shut down or clean the device. I think the calculation assumes that, since the device has no air inlet or exhaust pipe, it would need to contain any oxidizer used, and that the exhaust would need to be undetectable in the steam. As I understand it, the device is rather small and mounted so that a drainage pipe could not have been hidden. I have not checked all the details. Gr8xoz (talk) 03:37, 21 February 2011 (UTC)[reply]
I skimmed the 3 videos and I didn't see any of that. You should utterly ignore what they said, and only go by what you can actually verify. Ariel. (talk) 04:07, 21 February 2011 (UTC)[reply]
Thanks for your answers. I cannot verify anything; the videos could easily have been faked. It could of course be that all the journalists and "independent" observers there were in on it, or that some were easily fooled and the rest were in on it. If we assume the observers were not in on it, how could it be faked? Given that respectable magazines like this [16] (Swedish) write about it, I assume they could not just make up the names. So I think the report [17] from the observers should be given some credibility. (Not that I blindly trust it, but I am much less suspicious of that than of something Rossi himself has written.) --Gr8xoz (talk) 11:55, 21 February 2011 (UTC)[reply]
Maybe I missed it (I was skimming since I don't speak the language in the video), but weren't they not even in the room? It looks like they did the demo by video? Also, journalists do not attempt to detect fraud, they simply want to report what happened, and the more interesting the better. Please remember: I have NO idea if this is real or not, all I can tell you is that there were opportunities for fraud. But I can't tell if there actually was fraud. Ariel. (talk) 21:09, 21 February 2011 (UTC)[reply]
Since the observers report that they used their own power meters, thermometers, scales and radiation detectors, I would assume that the demonstration was not just a video recording. Obviously some videos show a presentation and some show the actual demonstration. --Gr8xoz (talk) 22:42, 21 February 2011 (UTC)[reply]
I know that James Randi says that magicians are better than scientists at detecting fraud, but I think that in cases like this you need an understanding of both illusions and science to be qualified to detect any fraud. Gr8xoz (talk) 22:24, 20 February 2011 (UTC)[reply]
It's not so much that scientists can't detect fraud as that they don't evaluate it, preferring a different standard. Randi and the scientists would both agree that in order to start looking for fraud you'd have to be able to get there, look over the equipment, see where the hydrogen comes in from, test the amperage in the electric cables and so on. But scientists have the more exacting standard that someone else has to be able to simply read the report about the experiments, construct his own apparatus, and create cold fusion on his own. This clearly makes fraud difficult to perpetuate - though it is true, often "irreproducible results" are discounted by scientists without anyone ever knowing whether they were the result of fraud or simple contamination or error. Wnt (talk) 17:40, 21 February 2011 (UTC)[reply]
Since they don't want to release some details, due to issues regarding the patent application, it cannot be independently reproduced. Supposedly independent researchers were there and did "see where the hydrogen comes in from, test the amperage in the electric cables and so on." They did so using their own instruments, but they were not allowed to look inside the device. There are three explanations: 1. The observers were in on it; in this case the fraud is very simple and uninteresting. 2. They were fooled; I think it is interesting to think about how this could be done. 3. The device does actually work as advertised. --Gr8xoz (talk) 22:53, 21 February 2011 (UTC)[reply]

Collapsing pyramids?

In referring to ground stone used somewhat as mortar, Egyptian pyramid construction techniques says the following:

The filling has almost no binding properties, but it was necessary to stabilize the construction.

Why would filling be needed for the construction to be stable? How could a pyramid possibly collapse, since it's not top-heavy and it's made of solid limestone blocks with no holes larger than a burial chamber? Nyttend (talk) 21:23, 20 February 2011 (UTC)[reply]

If no filler was used between roughly cut stones, it would be impossible to get each course to be level, and this would become more exaggerated with each new level. Thus, the size of the gaps would increase near the top, and this would allow water infiltration, which, if it froze (does it get below freezing there ?), would tend to drive the stones loose over time. There's also probably the "comfort factor": they didn't want dripping water to ruin the pharaohs' afterlives. StuRat (talk) 22:19, 20 February 2011 (UTC)[reply]
The article states ...stones forming the core of the pyramids were roughly cut..., which means that any upper course of stone blocks would only have had a few points of loadbearing contact to the lower course. The resulting uneven stress (receive load from top / transfer load to bottom) in any given block would have caused it to split and crumble, resulting in the gradual collapse of the entire face of a pyramid (which you can clearly see on some of them). A filler (binding or non-binding) has the simple purpose of distributing this load equally.
Compare this with you sleeping on a hard surface. Your entire weight will be resting on a few "protruding" bits, the head, shoulder blades, pelvis, etc. If you were to sleep on a beach - on a mattress of sand - a bit of wriggling about would evenly distribute your weight to the delight of Morpheus. --Cookatoo.ergo.ZooM (talk) 23:06, 20 February 2011 (UTC)[reply]
"Collapsing" here doesn't mean that they unfold and pour out the edges; it is more like mine subsidence (wow, that's a red link?). Even a multi-ton block of stone can break if you lay it on an uneven surface with half a pyramid on top of it, and the same applies to the one on top of it and so forth. Wnt (talk) 17:22, 21 February 2011 (UTC)[reply]
We have Subsidence#Mining. I'm not sure how to create a redirect yet; if I learn in the next 30 minutes I might fix it. Vespine (talk) 23:01, 21 February 2011 (UTC)[reply]
Well, THAT was easy! :) Vespine (talk) 23:04, 21 February 2011 (UTC)[reply]
Actually, is that the OPPOSITE of what mine subsidence means? Vespine (talk) 23:04, 21 February 2011 (UTC)[reply]

General Relativity and Conservation of Energy

I've heard energy isn't conserved under General Relativity. For example, cosmic background radiation is being redshifted due to the expansion of space with the energy going nowhere. Does the stress–energy tensor act as pretty much the same thing, or would it be possible to take advantage of this? If you can take advantage of it, how hard would it be to build a perpetual motion machine? Would it require a solar system? A black hole? — DanielLC 21:35, 20 February 2011 (UTC)[reply]

Redshift is not due to a loss of energy, it is due to a change of reference frame. Imagine a car of mass m driving past you at speed v; you say, the car's kinetic energy is <math>\tfrac{1}{2}mv^2</math>. Now imagine you're sitting in the car. Now the kinetic energy is 0, because the speed v = 0. Where did the energy go? Nowhere, you're just measuring it in a different reference frame. The fact that energy is not invariant to changes of reference frames is on the same level as time dilation and length contraction in special relativity. Conservation of energy always holds for local interactions like collisions or particle decay, if consistently described in the same reference frame. --Wrongfilter (talk) 22:01, 20 February 2011 (UTC)[reply]
I thought it's also partially due to the expansion of space. If a photon has a wavelength of 500nm, and space doubles in size, it will have a wavelength of 1000nm. — DanielLC 23:07, 20 February 2011 (UTC)[reply]
Change of reference frame in an expanding universe. Better? I should stress that I did not mean to imply that there is a one-to-one correspondence between the car example and cosmological redshift. I was just targetting the notion of energy not being conserved rather (that's what the example can do) than trying to explain what redshift is (it can't do that). --Wrongfilter (talk) 23:10, 20 February 2011 (UTC)[reply]
The problem with the expansion of space is that it's the same everywhere. If you want to harness the energy, you have to have a difference in the expansion, and thus a harnessable "potential difference". Anyway, a perpetual motion machine built on this would be driven by an external force (cosmic expansion), and thus it's no longer a perpetual motion machine but rather a power cell, rather like a solar cell. ManishEarthTalkStalk 13:26, 21 February 2011 (UTC)[reply]
I don't think redshifted light is losing energy. Remember, the light is redshifted because the universe expands. You may have light with 1/5 the frequency and 1/5 the energy as it had before, but meanwhile the universe has grown 5 times bigger and the same amount of light now covers a trail 5 times longer than before. I think... there are some aspects that confuse me. (For example, if a single photon is emitted from a source moving away from you at great velocity, do you still receive a single photon? does the redshifted photon split somehow? is the existence of the single photon at the far end sort of blurred into a quantum haze for you, like a cat in the box?) Wnt (talk) 17:49, 21 February 2011 (UTC)[reply]
1st: Yes, the volume increases and the energy density decreases, but those two effects don't cancel each other completely. Defining <math>a</math> as the expansion factor, the volume scales as <math>a^3</math> and the photonic energy density scales as <math>a^{-4}</math>, so the energy scales as <math>a^{-1}</math>. 2nd: No, the photons don't split. What the OP is neglecting here is that since there is a change in volume and there is pressure, work is being performed, and it is not surprising that there is a drop in energy. Dauto (talk) 18:12, 21 February 2011 (UTC)[reply]

Electrolysis of urine/Urine powered cars

I've read that performing electrolysis of urine produces far more hydrogen than water does.

http://www.nydailynews.com/lifestyle/health/2009/07/15/2009-07-15_urine_power_hydrogen_produced_from_urea_could_be_used_to_power_cars_houses.html http://www.wired.com/autopia/2009/07/pee-powered-cars/

These articles (among others) claim that one day we may be able to run our cars on hydrogen produced from the electrolysis of urine alone. Is this possible? I know that with water, you are putting more energy into electrolysis of water, than you are getting out of the hydrogen produced. But what about urine? ScienceApe (talk) 22:58, 20 February 2011 (UTC)[reply]

The electrolysis decomposes urea, (NH2)2CO, to get hydrogen, H2. Urine contains about 1% urea; a human produces about 25 grams per day. I do not know the mileage, but it would probably be worse than for gasoline, so each person would be able to travel less than 250 meters per day. It can be a small contribution to the fuel supply, but it will be very limited. Biogas from human feces would give a larger contribution. Gr8xoz (talk) 23:42, 20 February 2011 (UTC)[reply]
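(A rough order-of-magnitude version of that estimate in Python. The 25 g/day of urea is the figure above; the overall electrolysis stoichiometry CO(NH2)2 + H2O -> N2 + 3H2 + CO2, the 120 MJ/kg heating value of hydrogen and the ~1 MJ/km fuel-cell-car consumption are my own assumptions, and the electricity needed to drive the electrolysis is ignored.)
<pre>
urea_g_per_day = 25.0           # grams of urea excreted per person per day (from above)
M_urea, M_H2   = 60.06, 2.016   # molar masses, g/mol
H2_per_urea    = 3              # assumed: CO(NH2)2 + H2O -> N2 + 3 H2 + CO2
LHV_H2         = 120.0          # MJ per kg of hydrogen, approximate
car_MJ_per_km  = 1.0            # very rough fuel-cell car consumption, my assumption

mol_urea = urea_g_per_day / M_urea
kg_H2 = mol_urea * H2_per_urea * M_H2 / 1000.0
energy_MJ = kg_H2 * LHV_H2      # ignores the electrical energy put into the electrolysis

print("H2 per person per day: %.1f g" % (kg_H2 * 1000))    # ~2.5 g
print("energy in that H2:     %.2f MJ" % energy_MJ)        # ~0.3 MJ
print("driving range:         %.0f m per day" % (1000 * energy_MJ / car_MJ_per_km))
</pre>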
Let's make things clear. Hydrogen is a means to deliver energy, not a source of energy. Any method you choose to produce hydrogen is going to consume more energy than you get from the hydrogen at the end of the day. Dauto (talk) 02:11, 21 February 2011 (UTC)[reply]
Too bad. It would have been nice to start a long trip by buying a couple cases at BevMo rather than tanking up at Chevron. PhGustaf (talk) 02:41, 21 February 2011 (UTC) [reply]
It's not terribly uncommon for human waste to be turned into energy for use. The typical method is to let bacteria break poop down into biogas, and then using that for heating or cooking:[18], [19], [20]. Running a car on biogas is probably feasible, but I don't get the point of trying to break down urea for energy, a relatively small part of human waste, when we have all sorts of great fuel coming out our other hole. Buddy431 (talk) 03:03, 21 February 2011 (UTC)[reply]
In either case, I think the problem is that the total quantity of energy is low, relative to what cars need, so it wouldn't pay to set up an infrastructure to extract, store, and deliver such a fuel. It would make more sense to extract the energy at the waste treatment plant and use it directly there. StuRat (talk) 05:25, 21 February 2011 (UTC)[reply]
Of curse the processing will not be done in the car but biogas is already used in cars and buses on a rather large scale and hydrogen has often been proposed as a car fuel due to its high energy-content, it can also be used to produce liquid fuels. --Gr8xoz (talk) 12:01, 21 February 2011 (UTC)[reply]
LOL @ "Of curse".... yes it would be a curse to have your car storing manure while it decomposes into gas. I don't agree that bio-gas is used on a large scale now. There are a few experimental programs here and there, but you can't just drive up to your average gas station and expect to get bio-gas there. And if the source is as small as human (and even livestock) waste, it would never pay to set up such an infrastructure, at least in most places. A location that has lots of manure available, like a cattle feed lot, might want to use bio-gas for their tractors and trucks. StuRat (talk) 19:36, 21 February 2011 (UTC)[reply]
I do not know where you set the limit for "rather large scale"; obviously it is nowhere near the scale of gasoline, but here in Sweden some cities run their city buses and garbage trucks on biogas. In each of the bigger cities there are a few gas stations with biogas that are open to private cars. I think upgraded biogas and natural gas are freely mixed, since they have nearly identical composition, and the difference between buying one or the other is only a difference in accounting: if you buy biogas, someone has to put the same amount of biogas into the network, similarly to "green electricity". So the infrastructure is largely there already. There are more than 30,000 gas-driven cars. I do think we use only a small fraction of all the usable biological waste for gas production, so the potential is much bigger, but it would not be nearly enough to provide all the fuel. See Tables of European biogas utilisation and Biogas --Gr8xoz (talk) 22:24, 21 February 2011 (UTC)[reply]

February 21

Conservation of (angular) momentum

I remember an argument Feynman used to show that the conservation of angular momentum implies the conservation of linear momentum. I've never seen this anywhere else, though, so is it possible that he was mistaken? The proof goes something like this:

Suppose we have a system of particles. Conservation of L doesn't depend on where we choose the axis, so we can choose an axis that is very far from all the particles. Then, all the particles will have the same position vector r relative to this axis, which doesn't vary with time. Then dL/dt = 0 implies dp/dt = 0.

The only flaw with this argument that I can see is that r will be infinite, but I don't see how that invalidates the proof. Any insight? 74.15.137.130 (talk) 04:59, 21 February 2011 (UTC)[reply]

So basically he views the conservation of linear momentum as a special case of the conservation of angular momentum, where the radius is infinite. Sounds reasonable, to me. StuRat (talk) 05:20, 21 February 2011 (UTC)[reply]
Okay, after a little more thought, I realized that the step from conservation of L with respect to one axis to conservation of L with respect to all axes requires that the center of mass's velocity be constant, which is true iff momentum is conserved. So it's not all that surprising after all. 74.15.137.130 (talk) 06:42, 21 February 2011 (UTC)[reply]
If our choice of axis is one very far from all the particles, then r will be very large but it won't be infinite. It would be possible to do this with full mathematical rigor by taking the limit as r increases without bound. Some people would summarise this by saying "the limit as r approaches infinity", but "increasing without bound" is more rigorous than "approaches infinity".
There is much in common between these two conservation laws. I assume Feynman was saying that if we have established the truth of one of these conservation laws then a consequence is that the other is also true. It doesn't matter whether we start by establishing the truth of the conservation of linear momentum, or angular momentum. Either way, it follows that the other is also true. Dolphin (t) 11:23, 21 February 2011 (UTC)[reply]
Noether's theorem connects symmetries to conservation laws. I'm just parroting the article here, but a rotationally invariant/symmetric Lagrangian implies conservation of angular momentum, while symmetry under a continuous translation in space implies conservation of linear momentum. I guess that would be a good starting point if you want to derive this more rigorously. EverGreg (talk) 12:45, 21 February 2011 (UTC)[reply]
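For anyone who wants the "far axis" step written out explicitly, here is one way to do it; the displacement notation below is an illustration of the limit argument discussed above, not a quotation from Feynman. Displace the origin a distance R in the direction -n (n a unit vector), so each particle sits at r_i + R n relative to the new axis. Then

    \mathbf{L}_R = \sum_i (\mathbf{r}_i + R\,\hat{\mathbf{n}}) \times \mathbf{p}_i = \mathbf{L}_0 + R\,\hat{\mathbf{n}} \times \mathbf{P},
    \qquad
    \frac{d\mathbf{L}_R}{dt} = \frac{d\mathbf{L}_0}{dt} + R\,\hat{\mathbf{n}} \times \frac{d\mathbf{P}}{dt}.

If angular momentum is conserved about every axis (every R and every direction n), then the term R n × dP/dt must vanish on its own, and since n is arbitrary this forces dP/dt = 0. No infinite radius is needed, only the freedom to move the axis arbitrarily far, which is the rigorous version of "choose an axis very far from all the particles".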

Infinite universe and probabilities

My friend told me a theory which he had heard from someone who had read it somewhere. It's been bugging me for a week now: I believe it must be false, but I just can't tell him exactly why (except that the "from someone/where" part is pure snopes.com material). The theory goes something like this: if the universe is infinite, then there are infinitely many possibilities for how matter is organized. Thus, the probability that somewhere else in the universe the matter is organized exactly like here is 1. This would mean that there are at least two copies of me around, sitting in this universe typing this post, although the other guy might have some atoms missing.

My answer to him was that at least the amount of matter in the universe can't be infinite. But is there a more accurate / scientific answer available? --Albval (talk) 07:37, 21 February 2011 (UTC)[reply]

Yes, and there are several variations on the theory. See multiverse.--Shantavira|feed me 08:00, 21 February 2011 (UTC)[reply]
This isn't actually about the multiverse - it applies equally well to a single universe operating under known laws of physics. Back to the question, while the amount of mass in the visible universe must be finite, there is no theoretical limit on the total mass of the entire Universe; it may well be infinite (see shape of the universe for speculation that touches on the potential size of the universe). Someguy1221 (talk) 08:34, 21 February 2011 (UTC)[reply]
The probability of something happening may be 0 (impossible), in which case it still does not happen even given an infinite number of chances. Graeme Bartlett (talk) 09:48, 21 February 2011 (UTC)[reply]
Graeme, I was just about to respond to Albval on this point; I thought you would know better. Probability zero does not mean impossible, and probability one does not mean certain. See our almost surely article. --Trovatore (talk) 09:50, 21 February 2011 (UTC)[reply]
Forgive me for being uncivil, but the "almost surely" article is patent nonsense. An event with probability exactly equal to 1 is identical to an event with probability and is exactly guaranteed to occur. The "corner-cases" mentioned in the "almost surely" article deal exclusively with numerical models of statistical systems. I think some statisticians need a formal retraining in measure theory, basic mathematical limits, and the finite precision capabilities of numerical computers. The example case of the "dart" landing on an abstract concept (a "line") is merely a restatement of Zeno's paradox (something to the effect of never being able to get arbitrarily close to the abstraction of the imaginary line). See also, treatment of infinity in computers. This is a matter of axiomatic definition: an event with probability zero will not happen. Nimur (talk) 20:32, 21 February 2011 (UTC)[reply]
No, you're just wrong. Consider an infinite sequence of fair-coin flips. For any fixed ω-sequence of heads and tails, the probability that the actual sequence will match it is zero (so the probability that it will not match it is one). However, if every actual sequence were impossible, then it would be impossible for the sequence to come out any way at all, which is nonsense.
It is true that these concepts don't correspond to physical experiments that are easy to design, but they are absolutely fundamental to probability theory as it is modernly conceived. There are people who don't like them, but they have basically lost. You're the one who should learn measure theory. --Trovatore (talk) 22:58, 21 February 2011 (UTC)[reply]
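To put a number on the coin-flip example (this is just the standard calculation behind the claim above, nothing new): the probability that the first n flips of a fair coin match one fixed pattern s_1, ..., s_n is

    P(X_1 = s_1, \ldots, X_n = s_n) = 2^{-n}, \qquad \lim_{n \to \infty} 2^{-n} = 0,

so every particular infinite sequence has probability zero, and yet some sequence certainly occurs. That is exactly the sense in which "probability zero" and "impossible" come apart.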
I think the theory is correct. --Gr8xoz (talk) 12:27, 21 February 2011 (UTC)[reply]
The source: this exact theory, which relates to our universe under the assumption of inflation in an infinite universe, was put forward in 2001 by Alexander Vilenkin of Tufts University in Medford, Massachusetts, and Jaume Garriga of the Independent University of Barcelona. Inflation guarantees that the different regions are isolated. The time elapsed since the big bang determines the size of each region (how far we can see in the universe).
As for "exactly like here", Vilenkin and Garriga argued that there are only a limited number of distinct universe configurations in the infinite universe. While say the distance between two atoms may be a continuous variable, 'quantum blurriness' means that two distances in two worlds can't be told apart if they'r too small. Vilenkin and Garriga estimated that there were only 10^10150 distinct histories in the universe.
Just because something is infinite, doesn't necessarily mean that every possible configuration has to happen in that infinity. A simple example is the decimal expansion of π which is known to be infinite, but it is not known whether it contains seven consecutive sevens (for example). See Trovatore's link above. Dbfirs 14:36, 21 February 2011 (UTC)[reply]
The decimal expansion of π is infinite, but it is not a random sequence of digits (because it can be encoded in a finite number of symbols, e.g. as a series, or as 4 atan(1)). Whether that is true of the Universe is an interesting question, but I guess it is metaphysical rather than physical. The probability of finding a copy of the OP in the observable universe is 0 or virtually 0. --Wrongfilter (talk) 15:23, 21 February 2011 (UTC)[reply]
Er, Random#In mathematics says it's likely that π is random in some senses. I think you're taking an information-theory approach, not a math/statistics approach? But also, how would one write a finite expression to generate the numerical representation of pi? To get the digits, you have to start doing actual iterations of the expansion (and an infinite number of them to get the infinite-decimal-places string), not just say "in the theoretical limit, it is symbolically exactly equal to pi". I think the best you can say is that the whole value is something like non-arbitrary or that it's reproducible (I'm trying to pick words that don't have subtly different technical meanings than in lay-language), not that the string is a "non-random sequence of digits". DMacks (talk) 17:19, 21 February 2011 (UTC)[reply]
The digits of pi are absolutely not random. For example, the first one is always a 3. When people say "the digits are random", they (if you really pin them down) usually mean that pi is a normal number, which is only a conjecture. Staecker (talk) 01:10, 22 February 2011 (UTC)[reply]
It's "only a conjecture", but it would be astonishing if it were false. Mathematicians are extremely careful to separate what has actually been proved from what has not, which is a useful trait, but it sometimes causes outsiders to think that there is reasonable doubt about propositions when there really isn't. --Trovatore (talk) 01:59, 22 February 2011 (UTC)[reply]
In a generic infinite universe, not only would there be an infinite number of copies of you (actually of the entire visible part of the universe), but you also cannot identify yourself as any single copy. This is due to the fact that you have a limited amount of information about yourself. So, e.g., if you are not bald and assuming that you haven't counted exactly how many hairs you have on your head, there will be different versions of you, each with a different amount of hair, and they all share the same personal identity. This is despite the fact that the distance between the copies is astronomical, as Tegmark points out here. Count Iblis (talk) 16:35, 21 February 2011 (UTC)[reply]
One thought experiment that might help to understand this is "if there were an infinite number of monkeys randomly typing away on typewriters, they would eventually reproduce all the works of Shakespeare". You might think this is impossible too, but here we could actually calculate how many it would take. Let's say we just want to see the word "HI" (ignoring case) and that there are 100 keys on the typewriter (a typewriter with fewer extra keys would obviously give better odds, though). For two letters, it should take, on average, 100² or 10,000 monkeys (or one monkey with that many trials). For a 50 letter sentence, it would take 100^50 monkeys. If we assume Shakespeare's entire work is 10 million letters long (if anybody has a better figure than this guess, please let me know), that should take 100^10,000,000 monkeys. That's a very large number, but definitely less than infinity. StuRat (talk) 19:27, 21 February 2011 (UTC)[reply]
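For anyone who wants to check that arithmetic, here is a small Python sketch. It assumes 100 equally likely keys and independent keystrokes, and reads "number of monkeys" as the expected number of independent attempts (the reciprocal of the per-attempt probability); the function name is just for illustration.

    import math

    def expected_attempts(keys: int, length: int) -> int:
        # Expected number of independent attempts to type one specific string:
        # each attempt succeeds with probability keys**(-length), so 1/p = keys**length.
        return keys ** length

    print(expected_attempts(100, 2))               # "HI": 10,000
    print(math.log10(expected_attempts(100, 50)))  # 50 letters: 100.0, i.e. 10^100 attempts
    print(10_000_000 * math.log10(100))            # 20,000,000.0, i.e. 100^10,000,000 = 10^20,000,000

So the Shakespeare estimate is a 1 followed by about twenty million zeros: enormous, but finite, which is the point of the thought experiment.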
If anyone wants to try this at home, I really recommend implementing the standard RFC 2795. When running large experiments it is important to follow relevant open standards to avoid vendor lock-in. I also think it would be a good idea to run it on RFC 2549 in order to increase the biological diversity. Unfortunately RFC 2460 is way too limited to be used by itself. (Why do they never learn that you should not build standards with arbitrary limits?) --Gr8xoz (talk) 21:35, 21 February 2011 (UTC)[reply]
I take it that site is just an elaborate hoax ? StuRat (talk) 22:35, 21 February 2011 (UTC)[reply]
*blink* Um, IETF? No, they are the standards-body for pretty much everything that lets your computer interact with wikipedia's servers. DMacks (talk) 22:41, 21 February 2011 (UTC)[reply]
Well, not the entire works, but you need look no further than the Infinite monkeys article. Even if the observable universe were filled with monkeys typing from now until the heat death of the universe, their total probability to produce a single instance of Hamlet would still be less than one in 10^183,800... but not zero! lol. Vespine (talk) 22:59, 21 February 2011 (UTC)[reply]
You might want to take note of the publication dates for RFC 2795 and RFC 2549. Red Act (talk) 23:17, 21 February 2011 (UTC)[reply]
As far as I know, though, there's no RFC for IP over demographics. --Trovatore (talk) 23:27, 21 February 2011 (UTC) [reply]
No, they are actually the main standardization organisation for the internet protocols, but they do have humour: it is a tradition to release a joke standard on April Fools' Day. This is the case for RFC 2795 and RFC 2549. We even have an article about this tradition, April Fools' Day RFC. This started when the internet was very much smaller than today; the very first joke standard was written when there were fewer than 300 computers on the Internet. RFC 2460 is a real, serious standard for IPv6, the internet protocol that will replace IPv4. IPv4 has only 2^32 ≈ 4.3*10^9 addresses, and that will not be enough for all our devices in the future, so IPv6 has increased this to 2^128 ≈ 3.4*10^38. It is often joked that this address space is so huge it should be enough for all civilisations in the universe. --Gr8xoz (talk) 23:45, 21 February 2011 (UTC)[reply]

Diving: max. depth without worrying

How deep can you dive without caring about your safety? —Preceding unsigned comment added by 212.169.187.5 (talk) 13:44, 21 February 2011 (UTC)[reply]

Just to clarify, do you mean how deep should the water be in a swimming pool before you should dive into it, or how deep can you go when SCUBA diving? Googlemeister (talk) 14:19, 21 February 2011 (UTC)[reply]
Either way, the answer to the question depends on how stupid the diver is. Dauto (talk) 14:23, 21 February 2011 (UTC)[reply]
I meant SCUBA + free-diving. And no, the answer does not depend on how stupid the diver is. The question is how stupid the diver can be, without consequences. 212.169.177.33 (talk) 14:31, 21 February 2011 (UTC)[reply]
Maybe that's what he wanted to ask. But that's not what he asked. Dauto (talk) 14:51, 21 February 2011 (UTC)[reply]
Our article on Deep diving has some recommendations. Dbfirs 14:43, 21 February 2011 (UTC)[reply]
It's not just the depth, but how fast you reach this depth, how long you remain there and how fast you come back to the surface. The type of breathing gas you use is also relevant here. The answer for SCUBA is simply that you'll go through some certification program and keep the rules that you learned there. Quest09 (talk) 15:03, 21 February 2011 (UTC)[reply]
Free-diving is not limited by the risk of the bends (there is none) or by breathing gas. I'm not sure what it is limited by, other than the ability to hold one's breath, but the no-limits world record is 214 metres according to our article. SpinningSpark 18:09, 21 February 2011 (UTC)[reply]
As stated above, free-divers don't get bent. In SCUBA diving, this question is not answerable as asked, because you could get bent, theoretically, even diving to a 10 foot depth if you stay down long enough. Looking at the dive tables isn't actually helpful; the NAUI dive table doesn't start until 40 feet, but this is because people don't really do 20-foot dives, not because you're immune to the bends if you only do 20-foot dives. The question could be answerable if you included a time limit. Comet Tuttle (talk) 18:33, 21 February 2011 (UTC)[reply]
There have been cases of divers getting the bends in water as shallow as 6 feet, if the diver is there for hours and the dive is at a mountain lake. Googlemeister (talk) 22:15, 21 February 2011 (UTC)[reply]
The bends is not the only risk. Even in a short dive to 20 feet, it would be easy for a novice diver to accidentally inhale some seawater, panic, and shoot to the surface without remembering to breathe out, which can do significant damage to the ears or lungs. There is really no type of scuba dive where the diver can be stupid without consequences. Looie496 (talk) 19:14, 21 February 2011 (UTC)[reply]
Agreed; I am assuming the original poster is asking about decompression sickness. Obviously a 1 inch dive could be fatal if you stupidly breathe in all the water possible. Comet Tuttle (talk) 19:43, 21 February 2011 (UTC)[reply]
"The bends" is one of many scuba-diving hazards. In addition to "the bends," decompression sickness covers more of the hazards related to re-gassification of dissolved nitrogen. Other risks for ordinary-air SCUBA diving include oxygen toxicity, and nitrogen narcosis - both of which can occur while still under water. Proper SCUBA training will teach you how to correctly account for each of these risks. As has been mentioned, the "maximum depth" is a complicated factor - it depends on how long you stay at each depth, and how long since your previous decompression or surface trip. Divers use a dive table or dive computer to assist them in calculating safe dive depths and dive profiles - because "safety" is not described by one single depth value. If you breathe a gas other than air, such as nitrox or trimix, or if you are a technical diver and breathe multi-gas out of various cylinders in sequence, other hazards can exist. If you are an underwater construction expert, you may dive a profile that is considered "unsafe" by recreational standards. If you are a Navy Seal, you may be ordered (or choose) to use a dive-profile that could be fatal (but Navy SEALs understand that risk of fatality is a part of their job). If you dive at altitude, such as in Lake Titicaca, you run additional interesting risks related to pressurization; divers typically undergo a specific training for "altitude diving" such as the NAUI Recreational Altitude Diver course. If you will be diving out of a submarine or hyperbaric chamber, your safe-dive profile is very different than a surface scuba dive. Nimur (talk) 20:08, 21 February 2011 (UTC)[reply]
In fact, we have an entire article: Maximum operating depth. Needless to say, "don't try this at home" - of course, technical dives require special training and equipment. Nimur (talk) 20:11, 21 February 2011 (UTC)[reply]
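The Maximum operating depth article boils the oxygen-toxicity limit down to a simple formula; here is a minimal sketch of it, assuming seawater, the usual 10 m-per-bar approximation, and a 1.4 bar oxygen partial-pressure limit (a common recreational convention, not a value taken from this thread):

    def max_operating_depth(fo2: float, ppo2_limit: float = 1.4) -> float:
        # Depth in metres of seawater at which the oxygen partial pressure
        # reaches ppo2_limit, using roughly 1 bar of water pressure per 10 m.
        return 10.0 * (ppo2_limit / fo2 - 1.0)

    print(round(max_operating_depth(0.21), 1))  # plain air: about 56.7 m
    print(round(max_operating_depth(0.32), 1))  # EAN32 nitrox: about 33.8 m

This addresses only oxygen toxicity; the decompression and narcosis limits mentioned above still need tables or a dive computer.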
Has anyone tried to find a way to allow exchange of oxygen while free-diving, so that it would have the advantages of scuba? I mean, I think in theory you should be able to stick a pair of tubes down into someone's lungs, or even (maybe) provide oxygen via someone's dialysis shunt? A way to provide only the truly necessary amount of oxygen while leaving the lungs collapsed? Wnt (talk) 21:01, 21 February 2011 (UTC)[reply]
No, I'm not aware of any such technology. It would be incredibly hazardous to attempt to exchange gas at below the ambient water pressure. (Gas will not flow against the pressure gradient; you have to depressurize the lungs below ambient pressure to "suck" air in. That depressurization, which we normally call "inhaling", would be the same as "crushing" if ambient water pressure was very high; alternatively, your hose can "squirt" over-pressured air into the lungs, but the risk of hyperinflation is incredibly hazardous and defeats the point of a free-dive.) For these reasons, a SCUBA regulator regulates the breathing gas pressure to ambient water-pressure. If for any reason you do not want the human to be exposed to ambient water pressure (such as when you dive to great depths), you use a pressure hull to isolate their atmosphere. A rigid diving suit can do this - but no such suit can withstand great depths. Most good pressure hulls look more like a submarine - sort of spherical or cylindrical, so there are no regions where force is concentrated (so the material won't crumple). Then, the "ambient" pressure inside the hull can be maintained far below ambient water-pressure. Submarine atmospheres are often kept higher than 1 atm, but still at a safer level than the water pressure. Take a look, for example, at bathyscaphe or bathysphere. I believe the crew of ALVIN breathes at atmospheric pressure - but they have two inches of solid titanium to keep the water out. Nimur (talk) 21:25, 21 February 2011 (UTC)[reply]
I don't know if anything like what I'm suggesting has been attempted, but I don't think it's that impossible. If you have a sack of oxygen in equilibrium with the external water pressure, then just a slight extra pressure should send it into the lungs; likewise the return tube should carry out air with just a slight pressure. Of course, tubes to the lungs (at any pressure) might have safety issues of their own, but I don't see why there has to be a hazardous amount of crushing or exploding involved. (The 'sack' might still have some minimum pressure it holds at the start, and use a regulator at such low pressures, to limit its buoyancy near the surface). Wnt (talk) 01:49, 22 February 2011 (UTC)[reply]
If you regulate the pressure, you have SCUBA! And if you don't regulate the pressure, it won't work. You might be underestimating the crushing-force that 10 meters of water can exert; you physically will not be able to inhale unpressurized air. It will be prohibitively difficult for you to "breathe in" air that is pressurized at 1 atm when your body is surrounded by 2 atm of pressure - your diaphragm musculature just isn't strong enough to work across that pressure gradient. (In other words - as you try to "breathe in" by expanding your thoracic cavity with your diaphragm muscle, the water will literally crush your lungs and your thoracic cavity back to the original shape). This is exactly the reason why you can't get away with a super-long snorkel: you need the air to be pressurized very close to ambient water-pressure so that the air pressure inside your lungs helps your diaphragm work, pushing back against the crushing-force of the water pressure. (Here's a "How Stuff Works" post explaining in more detail). Let me give you another example of exactly how much crushing you are subject to while diving: when you're down at even mild depth, say 50 feet, your weight belt (which was tightly fitted at the surface) will be very loose and often requires re-adjustment; and if you tighten it at the bottom, you'll find it uncomfortably tightens and squeezes your waistline pretty hard when you re-ascend. The weight-belt, usually made of nylon, barely changes size at all - but both your wetsuit and your body have been significantly and measurably "shrunk" by the massive compressive force of ambient water pressure. Nimur (talk) 03:26, 22 February 2011 (UTC)[reply]
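To put rough numbers on the pressure gradient described above, here is a short sketch; the seawater density used is an assumed typical value, not a figure from this thread:

    RHO_SEAWATER = 1025.0   # kg/m^3, assumed typical density
    G = 9.81                # m/s^2
    P_ATM = 101_325.0       # Pa, surface pressure

    def absolute_pressure_atm(depth_m: float) -> float:
        # Total (absolute) pressure at depth, expressed in atmospheres.
        return (P_ATM + RHO_SEAWATER * G * depth_m) / P_ATM

    for depth in (0, 10, 30):
        print(depth, "m:", round(absolute_pressure_atm(depth), 2), "atm")
    # 0 m: 1.0 atm, 10 m: ~2.0 atm, 30 m: ~3.98 atm

So at only 10 m the water already adds a full extra atmosphere; lungs supplied with surface-pressure air would have to work against that entire difference, which is the long-snorkel problem in numbers.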
I was snorkeling once and saw a shell that must have been only 3 meters below me. I'm a pretty good swimmer, but I've never had any diving training and have never been deeper than the bottom of a pool. It took about 5 attempts to get down that deep, and I regretted it when I got back up: my ears hurt quite badly for about an hour. It was a lot more taxing and dangerous than I suspected, so I don't recommend it unless you have an experienced diver give you some training. Vespine (talk) 22:03, 21 February 2011 (UTC)[reply]

DNA and RNA

Would the complementary messenger RNA strand of a coding DNA strand be an exact copy of the coding DNA strand, with the only exception being that T (thymine) is replaced by U (uracil)? Thanks.

for example

ATCGAATT dna coding strand


AUCGAAUU RNA messenger —Preceding unsigned comment added by 99.146.124.35 (talk) 14:58, 21 February 2011 (UTC)[reply]

Yes, as explained in Coding strand. Assuming no mutations occur, the messenger RNA will have the same sequence except with uracil in place of thymine. Chipmunkdavis (talk) 15:06, 21 February 2011 (UTC)[reply]
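A tiny sketch of the rule just described; the helper names are mine, purely for illustration:

    COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def transcribe(coding: str) -> str:
        # Predicted mRNA from the coding (sense) strand: same bases, U for T.
        return coding.replace("T", "U")

    def template(coding: str) -> str:
        # The base-paired template strand that the polymerase actually reads.
        return "".join(COMPLEMENT[base] for base in coding)

    print(transcribe("ATCGAATT"))  # AUCGAAUU, matching the example above
    print(template("ATCGAATT"))    # TAGCTTAA

The mRNA ends up matching the coding strand because RNA polymerase actually reads the complementary template strand.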

If an extra A is added after ATC in the DNA coding strand, it would change many of the amino acids in the sequence. What exact processes would cause this mutation to happen? —Preceding unsigned comment added by 99.146.124.35 (talk) 15:26, 21 February 2011 (UTC)[reply]

Would the following change be considered a nonsense mutation, a frameshift mutation, or a missense mutation?

ATC-GAA-TTC-CGA-CCA-TGC... (non-mutated DNA)

ATC-AGA-ATT-CCG-ACC-ATG-C... (mutated, extra A added)

—Preceding unsigned comment added by 99.146.124.35 (talk) 16:25, 21 February 2011 (UTC)[reply]

The example given is a frameshift mutation. You could get a frameshift in the RNA relative to the DNA by translational frameshift or RNA editing. Wnt (talk) 17:27, 21 February 2011 (UTC)[reply]
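Regrouping the codons makes the frameshift visible; this small sketch (helper name is mine) reproduces the two sequences above:

    def codons(seq: str) -> list:
        # Split into triplets, dropping any incomplete codon at the end.
        return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

    original = "ATCGAATTCCGACCATGC"
    mutated = original[:3] + "A" + original[3:]  # insert an extra A after ATC

    print(codons(original))  # ['ATC', 'GAA', 'TTC', 'CGA', 'CCA', 'TGC']
    print(codons(mutated))   # ['ATC', 'AGA', 'ATT', 'CCG', 'ACC', 'ATG'] plus a leftover C

Every codon after the insertion point changes, which is why a single inserted base can alter the whole downstream protein.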

how much of an effect do daily multivitamins really have?

How much of an effect does taking a daily multivitamin supplement (like Centrum or whatever) really have on one's health? How would it compare to, say, cutting out one bad thing you do (going to McDonald's, or not exercising enough)? I mean, is it really a drastic effect if you take them properly for the rest of your life, or is it more like accredited homeopathy: nobody really knows if it does anything, but why not? 109.128.213.73 (talk) 19:18, 21 February 2011 (UTC)[reply]

It's not correct to characterize homeopathy as "nobody really knows if it does anything". Homeopathy is known scientifically to be useless, and there are a lot of references in the homeopathy article to back that statement up. People buy that stuff because of cultural, psychological and marketing reasons, not because there's any scientific uncertainty about it being nonsense. Red Act (talk) 20:54, 21 February 2011 (UTC)[reply]
Forget something? Thank you for understanding my analogy. I've marked where you forgot to answer my question by applying the analogy to what I'm asking about. 109.128.213.73 (talk) 21:35, 21 February 2011 (UTC)[reply]

It depends drastically on the individual. There's no question that there are people with serious vitamin deficiencies, and also no question that many people have none at all. Wnt (talk) 19:25, 21 February 2011 (UTC)[reply]

Though the benefits of daily multi-vitamin pills can be (and often are) exaggerated, please do not confuse them with homeopathy. Unlike homeopathy, the effects of vitamins, and particularly of vitamin deficiency, are well documented by the scientific community. SemanticMantis (talk) 19:40, 21 February 2011 (UTC)[reply]
Well, see our Multivitamin article; there are "evidence for" and "evidence against" sections. Comet Tuttle (talk) 19:42, 21 February 2011 (UTC)[reply]
My understanding is that the output of the multi-billion-dollar "supplement" industry is mostly, quite literally, "pissed" away (excuse the profanity). Unless you are pregnant or have a deficiency, which most people in the developed world don't, there is not much evidence that taking supplements has any benefit. In fact, there is some evidence to suggest some supplements, specifically anti-oxidants, might actually have a negative effect. Vespine (talk) 22:04, 21 February 2011 (UTC)[reply]
A multi-vitamin is no substitute for a healthy diet. They only contain a portion of the healthy nutrients which we know about.
Also, some of the nutrients they contain may be slightly different formulations which may not be bioavailable, or may need to be taken in conjunction with certain foods to be absorbed. Furthermore, you can potentially overdose on some vitamins, and having them in pill form can make that easier to do.
Finally, if you use your multi-vitamin as a justification for continuing to eat junk food, keep in mind that the pills do nothing about the excess calories, sugars, animal fats, trans fats, bad cholesterol, and sodium you are getting.
So, eating a good diet will provide you with all the nutrients you need, in a form you can use, including things like fiber, which are difficult to give in pill form, due to their bulk. StuRat (talk) 22:23, 21 February 2011 (UTC)[reply]

Some dietary supplements are critical for optimum health. Count Iblis (talk) 00:18, 22 February 2011 (UTC)[reply]

I'm not sure how you reach that conclusion from the linked reference. Vespine (talk) 00:26, 22 February 2011 (UTC)[reply]
Last line of first paragraph:

As director of LPI, I am often asked what supplements I take—after all, thinking about and researching micronutrients every day, I should know what dietary supplements are most important. While I think eating a healthy diet, exercising regularly, maintaining a healthy body weight, and avoiding tobacco are of utmost importance to maintain good health, I also think that some dietary supplements are critical for optimum health.

Count Iblis (talk) 00:46, 22 February 2011 (UTC)[reply]
Sorry, are you kidding? Just because someone thinks it doesn't make it so. Linus Pauling is well known for his controversial views on vitamins and megadosing, so you might want to read the claims coming from his eponymous institute with a low dose of sodium. Vespine (talk) 01:34, 22 February 2011 (UTC)[reply]

Cold

Cold slows down chemical processes, so why would that not apply to my own body? Even if it slows down only some of my cells some of the time, I'm sure that adds up to save my body some body life-cycle units (I'm not a biologist) in the long term. I don't know how it works, sorry, that is why I'm asking. I looked up stats and northern countries seem to have longer average life spans, and northern US states seem to have longer average lifespans. I realize there are many factors, such as availability and quality of health-care to be factored in, so those numbers alone are not good enough. —Preceding unsigned comment added by 88.160.92.233 (talk) 21:34, 21 February 2011 (UTC)[reply]

Your body is very good at regulating core temperature. In most of the places where it counts (i.e. internal organs, brain) your temperature does not vary much at all (just a few degrees) due to external temperature.Vespine (talk) 21:54, 21 February 2011 (UTC)[reply]
More like a few tenths of a degree. StuRat (talk) 22:08, 21 February 2011 (UTC)[reply]
If you could keep someone alive while their core body temperature is lowered, then you could theoretically extend life. (However, when you hit freezing, their cells all explode, and this certainly doesn't help extend life. ) This might actually be a reasonable medical treatment at some point in the future, if we can figure out how to make people hibernate, like many other mammals. This could keep someone alive, say, until a cadaver heart could be delivered from a pro-democracy demonstrator in China. :-) StuRat (talk) 22:08, 21 February 2011 (UTC)[reply]
Hypothermic people have in some cases been able to go without breathing for much, much longer than usual; our article suggests as long as 1 hour, but as the mortality from that particular situation runs 38-75%, I wouldn't want to try it. Googlemeister (talk) 22:12, 21 February 2011 (UTC)[reply]
Our Normal human body temperature article actually states "In adult men and women the normal range for oral temperature is 33.2–38.2 °C", so normal variation is about 5 degrees, but yes, in an individual the normal range appears to be about 0.7 of a degree. I'm not sure whether that correlates at all with the climate a person lives in. Actually, that reminded me that this "slowing down" is done during some surgeries, like cardiopulmonary bypass surgery: "hypothermia is maintained; body temperature is usually kept at 28ºC to 32ºC (82.4–89.6ºF)". This gives the surgeons a lot longer to perform the surgery while lowering the risk of damage to other organs due to lower blood pressure or oxygen availability. But it's certainly not a state you would like to be in while conscious. Vespine (talk) 22:50, 21 February 2011 (UTC)[reply]
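"Cold slows down chemical processes" is usually quantified with the Q10 temperature coefficient; here is a rough sketch, assuming a Q10 of 2.5, which is a typical textbook ballpark for biological processes rather than a value measured for any particular organ:

    def relative_rate(t_from_c: float, t_to_c: float, q10: float = 2.5) -> float:
        # Standard Q10 relation: rate ratio = Q10 ** ((T2 - T1) / 10).
        return q10 ** ((t_to_c - t_from_c) / 10.0)

    # Cooling from 37 C to 30 C, roughly the bypass-surgery range quoted above:
    print(round(relative_rate(37.0, 30.0), 2))  # ~0.53, i.e. about half the metabolic rate

That rough halving of metabolic demand is why the 28-32 °C range gives surgeons extra time, and also why normal core temperature, which varies by well under a degree, leaves essentially no room for the effect the original question is hoping for.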

Plants

Are there any plants that photosynthesise in moonlight, or entirely in moonlight and never in sunlight? — Preceding unsigned comment added by K4t84g (talkcontribs) 21:37, 21 February 2011 (UTC)[reply]

I doubt it. First, the visible light from the Moon is orders of magnitude less. But, more importantly, there is virtually no UV in moonlight, which is what plants tend to use for photosynthesis. StuRat (talk) 22:03, 21 February 2011 (UTC)[reply]
According to Howard Griffiths of Cambridge University, "probably no". In fact, he states that plants avoid moonlight, speculating that it might disrupt their circadian rhythm. Clarityfiend (talk) 22:08, 21 February 2011 (UTC)[reply]
Plants don't tend to use UV for photosynthesis; see for example this graph and Simarouba amara#Physiology, where there is some data on the wavelengths that chlorophyll and leaves absorb - most of it is in the visible spectrum. CAM plants (like cacti) are as close as you'll get to photosynthesis in the dark, but they only temporarily fix CO2 to use in photosynthesis the next day. SmartSE (talk) 00:48, 22 February 2011 (UTC)[reply]

Deeply ingrained superstitions...

I'm sure that I'm not the only person who finds himself in this situation sometimes - that is, knowing in my head that a certain superstitious behaviour/reaction is pure bunk, but still getting a deep-seated feeling of 'wrongness' in the gut if one does not act in the accepted manner in a situation where folklore suggests that bad luck may result.

For example. I see one magpie, I look around for another one. If there is only one magpie present, I salute the bird and verbally acknowledge its presence.

Or never, ever, under any circumstances walking under a ladder. Even if it means going out of my way.

Or feeling slightly uneasy if the numbers 13 or 666 come up in random situations and taking steps to avoid prolonged exposure to the number.

Just wondering if there is a name for this conflicted feeling? Thanks. --95.148.108.189 (talk) 21:56, 21 February 2011 (UTC)[reply]

If you want fancy words to describe this scenario, you might say that you "disbelieve the superstitions, but the prevalence of these cultural presuppositions amongst your peer group has had a normative social influence on you, and you willingly suspend rational analysis in certain circumstances." You could also use the term "cognitive dissonance" to describe the conflict between rational and irrational thought. Nimur (talk) 22:02, 21 February 2011 (UTC)[reply]
Sometimes conscious understanding of an illusion "breaks the spell", but there are lots of illusions that are not dissipated by conscious knowledge. Sometimes you have to try really long and hard to break the habits you've been raised with, and sometimes you will never truly rid yourself of them. I heard once that there have been "superstition parties" where people go to break mirrors and walk under ladders etc. to try to clear themselves of silly beliefs/habits, but I can't find any reference off hand. Vespine (talk) 22:39, 21 February 2011 (UTC)[reply]
Having been brought up without these superstitions, I have no problems with 13 or broken mirrors or ladders (though I do check that nothing is likely to fall on my head before I walk underneath). In the past, I have had to fight (mentally) to avoid being "infected" with these superstitions from friends who do seem to believe in them. The cultural transfer seems to be surprisingly strong. Dbfirs 00:12, 22 February 2011 (UTC)[reply]
There actually is a word for it: compulsive. In obsessive–compulsive disorder these feelings become so strong that they can make life miserable, but a great many people show milder forms of obsession or compulsion. Looie496 (talk) 00:52, 22 February 2011 (UTC)[reply]

Gasoline Octane - can you FEEL the difference when you drive?

So my buddy fills up the tank of his crappy, old Chinese-brand automobile yesterday and promptly declares that switching from 94 to 97 octane fuel has made a tremendous difference in the way it runs. I find this hard to believe and suggested it was a bit of bias due to the higher price he paid. He insisted he was correct. He's agreed to blind test octanes over several tanks of gas, but that's going to take awhile. In the meantime, I'm asking here: can you feel the difference between octanes when you drive, particularly between such a small change as 94 to 97? I hypothesized that a high performance vehicle might run differently, but a beater like his wouldn't know 97 octane if it bit it in the bumper. There's just too many other inefficiencies in the engine for that to bubble to the top as a defining quality. What says the RefDesk? The Masked Booby (talk) 22:05, 21 February 2011 (UTC)[reply]

Higher octane is not "better" fuel - it is "chemically different fuel that burns with different parameters." If your engine is not designed for high octane, you will obtain worse performance by filling up with it. A common symptom is engine knocking. See our article's section: effects of octane rating. Nimur (talk) 22:07, 21 February 2011 (UTC)[reply]
You won't get any knocking by using higher octane gasoline. It is just a waste of money. Dauto (talk) 22:25, 21 February 2011 (UTC)[reply]
High-octane gasoline typically has a lower calorific value than the normal stuff. So it is possible that he would see an unmeasurably slight deterioration in performance and/or economy. However, there are all sorts of possibilities. If he's happy paying for 97 octane, why not? In my own car, using premium gets rid of a slight knock on acceleration, making the engine seem smoother. Frankly I am not prepared to pay extra to get rid of it. Greglocock (talk) 22:37, 21 February 2011 (UTC)[reply]
I once fell victim to the claim that high octane gasoline "helps clean out an engine". What it actually does is make the CHECK ENGINE light come on. Wnt (talk) 22:53, 21 February 2011 (UTC)[reply]
The octane rating has to do with the requirement of high-performance, high-compression engines or turbocharged engines for fuel that isn't subject to pre-detonation ("knock") in the compression stroke, which can damage the engine. The fuel is less volatile (in a different sense from its tendency to evaporate) under compression than "regular" fuel, and, as pointed out, can have less energy content. It has no value at all in an engine that doesn't specifically require the higher rating. It isn't better, regardless of the silly "premium" marketing; it's just suited to specific engines that need it to perform their best. If the engine isn't designed to demand high octane ratings, you're wasting money and buying lower-performing fuel at a higher price. Acroterion (talk) 02:08, 22 February 2011 (UTC)[reply]
One kind of obvious point that no one seems to have mentioned is that it could be that your friend's car pings at 94 octane, but not at 97. Yes, he would notice that difference, for sure. It seems a little unlikely because it's usually very high-compression engines that want the very high octanes. On the other hand, I believe that old, dirty engines are more prone to pinging, so it's not totally beyond the realm of belief. --Trovatore (talk) 02:14, 22 February 2011 (UTC)[reply]