Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia
m →Coin nominal values: fixed Kainaw's submission for readability.
Why is it not good to have both 20 cent and 25 cent coins in a single currency? --[[Special:Contributions/84.61.176.167|84.61.176.167]] ([[User talk:84.61.176.167|talk]]) 15:33, 4 February 2011 (UTC)


:This is a classic example of the [[greedy algorithm]] when it comes to making change. If you want to give 40 cents in change and you have standard American currency of 1, 5, 10, and 25 cents, the least number of coins will be 1x25, 1x10, 1x5. If you add a 20 cent coin, the least number of coins will be 2x20 - which requires use of something other than the greedy algorithm. -- [[User:Kainaw|kainaw]][[User talk:Kainaw|&trade;]] 15:36, 4 February 2011 (UTC)
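A minimal Python sketch of that contrast (not from the original discussion; the coin set and the 40-cent amount follow the example above):

    def greedy_count(amount, coins):
        # Greedy: always take the largest coin that fits.
        count = 0
        for c in sorted(coins, reverse=True):
            count += amount // c
            amount %= c
        return count

    def optimal_count(amount, coins):
        # Dynamic programming: the true minimum number of coins.
        INF = float("inf")
        best = [0] + [INF] * amount
        for a in range(1, amount + 1):
            best[a] = 1 + min((best[a - c] for c in coins if c <= a), default=INF)
        return best[amount]

    coins = [1, 5, 10, 20, 25]
    print(greedy_count(40, coins))   # 3 (25 + 10 + 5): greedy misses the best answer
    print(optimal_count(40, coins))  # 2 (20 + 20)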


:You have to define what you mean by "good". You could have a coin in every denomination from 1 cent to 99 cents, and then you could always make change with a single coin. But that benefit would be outweighed by the problems of having 99 different types of coins to keep track of. At the other extreme you just have a 1 cent coin and it would take up to 99 to make change. There are other factors such as people find it easier to do arithmetic with multiples of 5 and 10. With various conflicting objectives, some of which are cultural, the question doesn't make sense mathematically. You could ask a more specific question like "Given there are to be 4 denominations, what should they be to minimize the average number of coins needed to make change?" Then you might then be able to show there is no solution that uses both a 20 and 25 cent coin.--[[User:RDBury|RDBury]] ([[User talk:RDBury|talk]]) 17:27, 4 February 2011 (UTC)
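That narrower question can be attacked by brute force. A sketch (assuming, arbitrarily, that the amounts 1-99 are equally likely, that a 1-cent coin is mandatory, and that change is always made optimally):

    from itertools import combinations

    def avg_coins(coins):
        # Average optimal coin count over the amounts 1..99.
        best = [0] + [None] * 99
        for a in range(1, 100):
            best[a] = 1 + min(best[a - c] for c in coins if c <= a)
        return sum(best[1:]) / 99.0

    # Exhaustive search over 4-denomination systems containing the 1-cent coin.
    # (Plain Python, so expect it to run for a minute or two.)
    print(min(((1,) + rest for rest in combinations(range(2, 100), 3)), key=avg_coins))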

Revision as of 17:44, 4 February 2011

Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

January 29

ZFC and its implications

As the modern foundation of mathematics, is it possible to prove all provable mathematical statements from ZFC? And if so, doesn't that mean that other axioms like the Peano axioms are not true axioms because they may be derived from ZFC? 220.253.245.51 (talk) 08:18, 29 January 2011 (UTC)[reply]

No, that's far too fundamentalist an approach. Axioms are not objectively true or false; they're something you decide yourself whether to work with or not. How it works, roughly, is that mathematicians have some fuzzy, intuitive idea of what they think should be provable, and then they try to construct axiom systems that allow that to be proved without also proving something that the mathematician strongly feels should not be provable, such as 0=1. Doing so is not easy, because there is no guaranteed way of making sure that your axiom system doesn't prove something absurd -- but it's the best we have.
The fact that Peano's axioms can be derived from ZFC (or more formally, that they can be proved in ZFC to be consistent) does not make the Peano system lesser than ZFC. If anything, it makes Peano more trustworthy than ZFC, because if Peano should turn out to be inconsistent, then so goes ZFC too.
When we get close to such foundational issues, we hit an area where intelligent, insightful mathematicians can and do disagree about what is reasonable. Most mathematicians probably do think that there is such a thing as fundamental, objective truths about the natural numbers (and that the Peano axioms are among those truths, so we can trust things proved from them). But as soon as we get to sets -- particularly infinite sets -- some uncertainty creeps in. It is practically inconceivable that a contradiction exists among the Peano axioms, but a contradiction in ZFC is merely very unlikely -- it would be a tremendous surprise and leave mathematics in disarray, but it's not going to drive anybody crazy from the knowledge, and many have thought about how we'd go about rebuilding everything.
It is also not true that ZFC allows one to prove everything one would like to prove -- in particular, category theory would be much nicer and smoother if you could speak about things such as "the set of all possible complex vector spaces", which are not allowed to exist in ZFC. There are various ways to side-step this issue within ZFC (somewhat kludgy but mostly technically adequate), and some proposed axiom systems different from ZFC that would be easier to do category theory in (such as Quine's NF, but there is much less trust in those being contradiction free than for ZFC). –Henning Makholm (talk) 10:31, 29 January 2011 (UTC)[reply]
I would agree with some aspects of the above but not all of it. Most mathematical realists believe that the axioms of ZFC are objectively true in their intended interpretation, which is given by the von Neumann hierarchy. However they are far from all that is true in the von Neumann hierarchy. In particular large cardinal axioms should in general be true.
You need very minor large cardinals (say, a proper class of inaccessible cardinals) to work one of the kludges Henning is referring to (the so-called Grothendieck universes). Those are pretty harmless in terms of consistency; hardly anyone expects them to result in a contradiction. There's more controversy about the more interesting ones, say measurable cardinals or Woodin cardinals or supercompact cardinals. In some sense they are more interesting precisely because they "take more risk" of being inconsistent. This fits in well with Popperian falsificationism. --Trovatore (talk) 09:52, 30 January 2011 (UTC)[reply]
I was trying to avoid restarting the unproductive shouting match about who gets to call himself a "realist", by setting a low bar for what everyone is supposed to agree on. :-) As for creeping doubt, I can only say that I personally feel deeply in my liver and spleen that the Peano axioms must be right, and reserve the right to go raving mad with ontological vertigo if they are demonstrated to be inconsistent -- whereas I simply cannot muster the same degree of visceral certainty for the proposition that ZFC's power and replacement axioms are safer than Cantorian set comprehension. I do pragmatically believe the latter, but more due to the failure of all efforts to falsify it experimentally than because it intuitively must be so. –Henning Makholm (talk) 10:26, 30 January 2011 (UTC)[reply]
FWIW I apologize if I was a participant in the shouting match you're referring to. I will try, as best I can, to move 2-person discussions off the reference desk in the future. — Carl (CBM · talk) 13:14, 30 January 2011 (UTC)[reply]
Apology accepted, though I'm quite sure none was due. Yes, I thought that debate was tediously going in circles, but we might as well close the refdesk down if anyone had a right not to see conversations they found tedious. –Henning Makholm (talk) 13:29, 30 January 2011 (UTC)[reply]
Have to call you out on another issue there — the "Cantorian set comprehension" thing. I take it you are taking the position that Cantor believed in unrestricted comprehension. That is not in fact clear at all. There is a substantial debate over what Cantor thought and when he thought it; the so-called "paradoxes of naive set theory" may in fact not attach to Cantor's viewpoint at all, at least to the later Cantorian viewpoint. --Trovatore (talk) 10:31, 30 January 2011 (UTC)[reply]
Oh, I was not purporting to convey biographical information about Georg Cantor, merely using "Cantorian" as a conventional label (because, in the heat of the moment, it did not occur to me simply to call it "universal" comprehension). It's entirely possible that it's not historically well-founded; if so I'd assign some tentative blame on Hilbert with his famous quip about paradise. –Henning Makholm (talk) 10:55, 30 January 2011 (UTC)[reply]
I really don't see why. Why do you want unrestricted comprehension? As far as I can tell it's simply a category confusion; it comes about from confusing the extensional notion of set with the intensional notion of class (i.e. predicate). --Trovatore (talk) 11:03, 30 January 2011 (UTC)[reply]
Huh? I don't "want" unrestricted comprehension. It was just an example of an idea that looked pretty neat at first, but then turned out to be disastrous. The real point was that my intellect, however proud I might be of it otherwise, is too feeble to really grok the benignness of ZF replacement. This may be a failing of mine, but I'm arrogant enough to suppose that I can't be the only one suffering from it. –Henning Makholm (talk) 11:21, 30 January 2011 (UTC)[reply]
Oh, well, I was casting around for your meaning wrt the "paradise" thing, and what I came up with was "paradise is where what we want to be true, is true". So from that I got that you wanted unrestricted comprehension to be true. Sorry if that's not what you meant.
Your point about failing to refute things experimentally, despite much effort, is part of what I was talking about with large cardinals. No one I know of thinks that it's self-evident that, say, Woodin cardinals exist. That doesn't mean people don't think it's true. But it's a discovered truth, not a self-evident one. I think we have an article on quasi-empiricism in mathematics (though Quine himself was not enthusiastic about higher large cardinals; not sure about Putnam). --Trovatore (talk) 11:36, 30 January 2011 (UTC)[reply]
Of course the reason people would want unrestricted comprehension is that it's an immediate consequence of the concept of set as "any possible collection of mathematical objects". Of course this concept turns out to be inconsistent, and the type of naive set theory built upon it is inconsistent, but that doesn't change the fact that unrestricted comprehension is intimately attached to the natural-language concept of "set".
Historically, it seems that Cantor's earlier work (Grundlagen) in set theory was unarguably inconsistent, as he treated the class of all cardinal numbers as a set, while his later work (Beiträge) might or might not be, depending on how it's read. The main difficulty is that Cantor was sufficiently vague about what he meant by "set" at the end that it's hard to tell. The reference Frápolli 1991 from naive set theory examines that issue more closely [1]. — Carl (CBM · talk) 12:48, 30 January 2011 (UTC)[reply]
Regarding "paradise", I was alluding to the fact that Hilbert famously called some kind of set theory das Paradies, das Cantor uns geschaffen hat, and speculated that this might have had the historical effect of people mistakenly attributing to Cantor some of the ideas that Hilbert spoke of. Now, looking closer into it, I find that Hilbert said this long time after the paradoxes of naive set theory had been discovered. So whatever Hilbert was speaking about certainly didn't include unrestricted comprehension, and I hereby retract my speculation. –Henning Makholm (talk) 13:08, 30 January 2011 (UTC)[reply]

Re the original IP: another thing to be careful about is that, although ZFC is currently the most common foundation for undergraduate mathematics, it is not necessarily a foundation for all mathematics. For example:

  • Set theorists routinely study aspects of mathematics using hypotheses that cannot be proven in ZFC. These include large cardinals, determinacy axioms, and examples in set theoretic topology that explicitly assume the continuum hypothesis or its negation.
  • Most non-set-theoretic mathematics can be done in ZFC, but not quite all. In particular, the original proof of Fermat's last theorem is an interesting case. Although the proof can be reworked in ZFC, the literal methods used in the proof employ Grothendieck universes and cannot be directly formalized in ZFC the way that elementary group theory can be.

There are many ways of interpreting these things, including reductionist ways that find a way to interpret all the work in ZFC. But the claim "ZFC is the foundation for all mathematics" is more subtle than elementary books let on. — Carl (CBM · talk) 21:09, 30 January 2011 (UTC)[reply]

Terence Tao has claimed[2] that part of the undergraduate fundamental theorem of linear algebra's statement cannot be formalized in ZFC. The theorem says (among other things) that every finitely-generated real vector space V has a dimension dim(V). The issue is that those vector spaces form a proper class and not a set, so ZFC cannot quantify over them. 71.141.88.54 (talk) 23:27, 30 January 2011 (UTC)[reply]

There is no problem in quantifying over a proper class, only in collecting it into a completed whole. I think you've overinterpreted what Tao wrote. Basically what he's saying (I think; I only gave it a glance) is that dim is not strictly speaking a function, because its domain is a proper class. That's fine. You just reinterpret the statements in a completely routine way to use a definable predicate for dim rather than a function in the strict sense. --Trovatore (talk) 23:44, 30 January 2011 (UTC)[reply]
I took a second glance, and it's a little more interesting than what I was saying. His point, I think, is that you can do the definable-predicate thing I was talking about, but only if you actually know the definition. What you can't do directly in first-order logic is say "There is a (class) function dim with these properties, only I don't know what function it actually is".
Whether that's really the content of the theorem he quotes, though, is arguable, I think. Translating theorems from English into formal language is not quite automatic, and occasionally one has to look at the proof to see what the theorem actually means. In this case it's not a pure existence statement. It means something like "I can give you an explicit definition of dim such that the following properties hold". --Trovatore (talk) 00:32, 31 January 2011 (UTC)[reply]
This is a well-known (or well-ignored) issue in second-order arithmetic and other theories, too; it's not at all unique to set theory. When we write an English phrase with quantifiers over higher types (which, in the context of set theory, is analogous to quantifying over proper classes), the convention is that when this statement is formalized, the quantifiers are removed in a standard way to get something that can be written in the language of the theory at hand. — Carl (CBM · talk) 00:50, 31 January 2011 (UTC)[reply]
Trovatore, I'm still quite confused about "class functions" and "ordinary functions". By ordinary function I mean the usual definition: a set of ordered pairs blah blah. By class functions I mean things like the singleton function, the union function, the binary ordered pair function. The class functions are made by first proving a unique object with certain properties exists for any set (or pairs of sets or other stuff) and then introducing a new function symbol for it. What I don't understand is: how do we think of class functions in a non-formalist manner? They are certainly not sets, so do we think, in realist terms, of the collection of all sets and then view class functions as literally taking sets to other sets (instead of coding them as ordered pairs, which is impossible anyway)? Money is tight (talk) 14:58, 31 January 2011 (UTC)[reply]
Generally class functions are a convenient way of talking about some definable predicate (maybe with parameters — of course in set theory a single arbitrary parameter is as good as any number of them). So in general you have a formula φ with three free variables, and a parameter x, such that for every set y, there's exactly one set z such that φ(x,y,z) holds; then φ and x define a class function that takes y to z. Make the obvious adjustments if you don't want the domain to be all sets.
Note the big difference from ordinary functions: We don't have any way, in this context, of talking about arbitrary functions whose domain is a proper class, but only definable ones (with parameters). If you want to talk about arbitrary class functions, you're into the domain of second-order logic. --Trovatore (talk) 20:01, 31 January 2011 (UTC)[reply]
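Written out schematically (my notation, not anything from the thread), the convention is:

    \[
    \forall y\;\exists!\,z\;\varphi(x,y,z),
    \qquad\text{and then}\qquad
    \psi\bigl(F(y)\bigr)\ \text{abbreviates}\ \exists z\,\bigl(\varphi(x,y,z)\wedge\psi(z)\bigr),
    \]

so "F" never occurs as an object of the theory; every statement mentioning it unfolds into one about φ.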
There are also NBG and MK set theory, which do allow defining class functions; one can perfectly well state "there is a class function that assigns the dimension to every finite dimensional vector space" in these set theories, by quantifying over classes. Better, you can prove in NBG (and hence also in MK) that there is a class function like that. NBG (but not MK) is conservative over ZFC: any fact about sets expressible in the language of ZFC and provable in NBG is provable in ZFC already. Indeed, you can get a model of NBG from a model of ZFC by taking all the definable classes as classes, although the intended model of NBG has all classes rather than just the definable ones. — Carl (CBM · talk) 20:12, 31 January 2011 (UTC)[reply]
And article links, for curious readers: NBG set theory, MK set theory. –Henning Makholm (talk) 20:59, 31 January 2011 (UTC)[reply]
Well, the intended model of Kelley–Morse is not really (V, P(V)), which doesn't really make sense. It's (V_κ, V_{κ+1}), where κ is some inaccessible cardinal. It's not clear what the "intended model" of NBG would be — I tend to think of it as V and the definable classes of V. --Trovatore (talk) 21:49, 31 January 2011 (UTC)[reply]
Yes, it can't be (V,P(V)) if P(V) is supposed to be a collection of sets. I prefer the approach that, in the intended interpretation, the set quantifiers range over all sets and the class variables range over all classes. That doesn't require a commitment whether there are nondefinable classes. That interpretation gives the axioms their usual (disquotational) meanings, which is the property I usually associate with the intended interpretations of foundational theories. — Carl (CBM · talk) 22:25, 31 January 2011 (UTC)[reply]
Except that KM just doesn't really make sense in that interpretation. If V is a completed totality then it has to be a set; if it's not a completed totality then it doesn't make sense, as far as I can see, to talk about arbitrary subcollections of it. --Trovatore (talk) 22:40, 31 January 2011 (UTC)[reply]
Whatever V is (completed or not), it's certainly a proper class, so I am comfortable asserting that at least one proper class exists. The class of ordinals is a second one. To avoid getting into a lengthy discussion, we should move to my talk page; you can have the last word on the matter here. — Carl (CBM · talk) 22:58, 31 January 2011 (UTC)[reply]
My position is that V isn't, strictly speaking, anything. It's not an object. Statements regarding it are to be re-interpreted in a stereotyped way that I'm sure you understand.
The reason is that if V did in fact exist, it would have to be a set. The intuitive concept of the von Neumann hierarchy doesn't allow you to stop before you get to the end (and you never do). So if V existed, then ON would also exist, and being a wellordered collection of ordinals would itself have to be an ordinal, and now you have Burali-Forti and the other antinomies. --Trovatore (talk) 03:55, 1 February 2011 (UTC)[reply]

Constructing mathematical models.

I am trying to (self) learn how to construct mathematical models and then solve them. The question I am considering is this: Suppose I have a culture of bacteria in a petri dish which divides itself into two identical copies of itself every 10 minutes. For an arbitrary time unit I wish to write and solve both discrete and continuous time models which give me the number of cells N(t) at time t. What I am deducing right now is that if t = 10q + r (0 ≤ r < 10) according to the division algorithm, then N(t) = 2^q·N(0)·(1 + r/10) is the discrete model. Can someone tell whether this is correct and how to solve it. Also how do I deduce and solve the continuous time model. Thanks-Shahab (talk) 08:25, 29 January 2011 (UTC)[reply]

What you have constructed isn't a discrete model - notice that your N is not restricted to taking whole number values, as it would be in a model in which bacteria were discrete rather than continuous. Instead, you have a continuous model that is actually a series of linked linear models; for the first 10 minutes the number of bacteria increases at a constant rate of N/10 per minute; between t = 10 and t = 20 they increase at a constant rate of 2N/10 per minute; then 4N/10 for the next ten minutes, and so on.
An example of a discrete model is if the number of bacteria is assumed to instantly double every ten minutes, so that N(t + 10) = 2N(t). Gandalf61 (talk) 10:27, 29 January 2011 (UTC)[reply]

See exponential growth. N(t) = N(0)·2^(t/10) is your formula for the number of cells N at a given time t, knowing the initial number of cells N(0). This function satisfies the difference equation N(t+10) = 2N(t). The logarithm of N grows linearly: log(N(t)) = log(N(0)) + (log(2)/10)·t. Bo Jacoby (talk) 11:40, 29 January 2011 (UTC).[reply]
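A small numerical sketch of the two models (the variable names and the starting value of 100 are mine):

    N0 = 100

    def N_discrete(t):
        # Instant doubling every 10 minutes (constant between jumps).
        return N0 * 2 ** (t // 10)

    def N_continuous(t):
        # Smooth exponential growth: N(t) = N(0) * 2^(t/10).
        return N0 * 2 ** (t / 10)

    for t in [0, 5, 10, 15, 20]:
        print(t, N_discrete(t), round(N_continuous(t), 1))
    # The two agree exactly whenever t is a multiple of 10.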

Thank you for helping me. I see that both of you have constructed the discrete model by taking r = 0, so that essentially time is measured in intervals of 10 minutes. Now my question is how do I solve the continuous model N(t + 10) = 2N(t).-Shahab (talk) 14:54, 29 January 2011 (UTC)[reply]
But your model is already solved - you have an expression that gives you N at any time, given N at one known time t - so not sure what further information you are expecting here. Gandalf61 (talk) 14:58, 29 January 2011 (UTC)[reply]
I am looking for an expression for N(t) which does not involve any kind of recurrence, a formula in which you plug in t and get the requisite number of bacteria, provided N(0) is given. I think Bo Jacoby gave such an expression N(t) = N(0)·2^(t/10) for the discrete case-Shahab (talk) 15:30, 29 January 2011 (UTC)[reply]
Bo Jacoby gave an expression for the continuous case. --COVIZAPIBETEFOKY (talk) 21:55, 29 January 2011 (UTC)[reply]
Okay. Can you bear with me please and explain how Bo Jacoby arrived at this continuous solution strictly from N(t + 10) = 2N(t). Also, is the solution N(t) = N(0)·2^⌊t/10⌋ for the discrete case correct?-Shahab (talk) 03:26, 30 January 2011 (UTC)[reply]
You don't need the floor signs in the discrete case, because "discrete" means that you're only going to apply the formula when t is a multiple of 10 anyway, so the floor does nothing.
Once you remove the floor signs, you have an expression that is meaningful for any t, is nicely simple and C^∞, and agrees with the discrete solution on points where the latter is defined. What more do you want for a continuous model? You reasonably want your functional equation N(t + 10) = 2N(t) to hold also for t ≥ 0 that are not multiples of 10, but it is very easy to check that N(t) = N(0)·2^(t/10) satisfies that. What is there not to be satisfied with about that model? –Henning Makholm (talk) 05:19, 30 January 2011 (UTC)[reply]
Is it possible that there is still confusion over the meaning of the question? (though Gandalf61 did explain above). Real bacteria don't usually wait for ten minutes, then all divide at once (though in certain circumstances there might be chemical co-ordinating messages in a few species). Real bacteria are dividing "almost" at random, with the average being a doubling every ten minutes in your example. This is why a continuous model is valid and usually accurate for real bacteria. Dbfirs 08:36, 30 January 2011 (UTC)[reply]
Thanks to all your comments, almost all my doubts are resolved. Only one more question please: I had deduced N(t + r) = N(t)(1 + r/10) as my model initially. How do I get N(t) = N(0)·2^(t/10) from here? -Shahab (talk) 09:04, 30 January 2011 (UTC)[reply]
Your problem is that N(t + r) = N(t)(1 + r/10) is wrong. You don't get anywhere from there.
To see that it is wrong, set N(0)=100, and imagine moving first to N(5) and then to N(10). By your formula you'd get N(5)=N(0)*(1+0.5)=150 and N(10)=N(5)*(1+0.5)=225, but N(10) was only supposed to be 200. –Henning Makholm (talk) 09:46, 30 January 2011 (UTC)[reply]

Ostensibly simple probability question?

How do I prove that P(A | A∪B) ≥ P(A | B) for all A and B? Ostensibly it seems like such a simple problem but I can't do it! 220.253.245.51 (talk) 11:30, 29 January 2011 (UTC)[reply]

P(A | A∪B) = P(A)/P(A∪B) ≥ P(A∩B)/P(B) = P(A | B) for P(B) ≠ 0, and undefined elsewhere.
Bo Jacoby (talk) 11:49, 29 January 2011 (UTC).[reply]
Thanks, I tried that actually but that assumes A ∩ (A∪B) = A. Can that be proven? 220.253.245.51 (talk) 11:55, 29 January 2011 (UTC)[reply]
A ∩ (A∪B) = A as a matter of naive set theory. If you need help proving that, you need to disclose exactly which formalization of unions and intersections you have to work from. –Henning Makholm (talk) 12:17, 29 January 2011 (UTC)[reply]
(1) A ⊆ A∪B
(2) if A ⊆ C then A∩C = A
Bo Jacoby (talk) 01:47, 31 January 2011 (UTC).[reply]

Taking from a large population without replacement

Suppose there is a bag in which there are n balls, of which w are white and the remaining (n - w) are red. Then, on drawing balls without replacement, the probability that the ball which we are about to draw is white is not independent of the results of previous draws: If we have already drawn many reds, then it is more likely that the next one be white, and vice-versa.

To be precise, if we draw k balls and if X is the random variable that is the number of white balls drawn, then (writing C(a, b) for the binomial coefficient)

P(X = x) = C(w, x)·C(n − w, k − x) / C(n, k).

However, in examination questions on the binomial distribution, often they say "assume n is large", and expect me to use the binomial distribution to model X:

P(X = x) = C(k, x)·(w/n)^x·(1 − w/n)^(k − x).
Is the binomial distribution a valid approximation for large n, and how can it be justified? --jftsang 22:09, 29 January 2011 (UTC)[reply]

If you expand both the approximation and the exact expression, you'll see that the approximation is obtained by replacing w!/(w-x)! with w^x, (n-w)!/(n-w-k+x)! with (n-w)^(k-x) and (n-k)!/n! with n^k. The reason this is ok for large numbers is that w!/(w-x)! is equal to w(w−1)⋯(w−x+1), and for fixed x and as w tends to infinity, the ratio of this expression to w^x tends to one (think like this: for really big w, (w-1) is pretty much the same as w...). The same argument covers the other terms. Quantitative estimates will be harder to come by: you'll need to look carefully at the ratio I just talked about to see exactly how it behaves. Tinfoilcat (talk) 00:56, 30 January 2011 (UTC)[reply]
For the binomial distribution to hold, the probability of drawing a white ball needs to be constant. Initially that probability is w/n. After drawing one ball, it is either (w-1)/(n-1) or w/(n-1) (depending on whether you drew a white or red ball). Let's assume you drew a white ball and the probability is now (w-1)/(n-1) (the argument for the other case is similar). Let's divide top and bottom by n (since we're going to assume n is large, it helps to have it only appear in denominators). We get:
(w/n − 1/n) / (1 − 1/n).
If you let n tend to infinity, you'll get:
w/n,
which is the initial probability. So, for an infinitely large bag, the probability doesn't change (which isn't surprising - infinity minus one is just infinity).
You can get an idea for how good an approximation that is for a large, but not infinite, bag by looking at the Taylor series. We'll use the series:
1/(1 − x) = 1 + x + x^2 + x^3 + …
In this case, we get:
(w/n − 1/n)(1 + 1/n + 1/n^2 + …) = w/n − (n − w)/n^2 − (n − w)/n^3 − …
Since n is large, we'll assume 1/n^3 is negligible and will ignore it. That leaves us with just:
w/n − (n − w)/n^2.
For really large n, that second term is going to be tiny. For slightly smaller n, we can see that if w and n are of similar size, it's still going to be small. If n is a lot larger than w, but still not very large, it could be substantial. That tells us that the binomial distribution is a good approximation for "large" n and that what constitutes "large" depends on the relative sizes of n and w - the larger w is, the less large n needs to be. --Tango (talk) 02:13, 30 January 2011 (UTC)[reply]
Actually, I need to take some of that back. For the other case, where you drew a red ball, you get the probability of the second ball being white (again, ignoring terms with a 1/n^3 in them) to be w/n + w/n^2. In that case, the unwanted term is small for large n and for small w (rather than large w, as before). That means the only way to get a good approximation is to have a really large n. (How large obviously depends on how precise you need to be.) --Tango (talk) 02:18, 30 January 2011 (UTC)[reply]


For sampling without replacement you want the hypergeometric distribution instead of the binomial distribution. 71.141.88.54 (talk) 08:33, 30 January 2011 (UTC)[reply]

If you want to be precise, yes, but the OP was talking about approximating it by binomial, which is very common since (for large enough n) the approximation is very good and the binomial distribution is easier to work with (and people are generally more familiar with it). --Tango (talk) 16:15, 30 January 2011 (UTC)[reply]
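A quick numerical comparison of the exact and approximate distributions (the parameters are arbitrary illustrative choices; math.comb needs Python 3.8+):

    from math import comb

    def hypergeom_pmf(x, n, w, k):
        # Exact: k draws without replacement from n balls, w of them white.
        return comb(w, x) * comb(n - w, k - x) / comb(n, k)

    def binom_pmf(x, k, p):
        # Approximation: k independent draws with fixed success probability p.
        return comb(k, x) * p**x * (1 - p)**(k - x)

    n, w, k = 10000, 3000, 10
    for x in range(k + 1):
        print(x, round(hypergeom_pmf(x, n, w, k), 5), round(binom_pmf(x, k, w / n), 5))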


January 30

First digits of 10^n

How can I compute the first x digits of 10^n? n is not necessarily an integer. 149.169.132.90 (talk) 02:15, 30 January 2011 (UTC)[reply]

From first principles? Compute successive square roots of 10 to the desired precision (and add a few digits to absorb rounding errors in the next phase) until the value rounds to 1. Express the fractional part of n in binary, and multiply those of the square roots that correspond to 1 bits. (See exponentiation by squaring for theory of the latter step). If you want to be completely free of rounding errors, you can repeat the entire computation twice, always rounding up and down respectively, and increase the precision of intermediate values if the two results do not agree to the precision you desire.
(This is how Henry Briggs computed the first base-10 logarithm tables). –Henning Makholm (talk) 03:09, 30 January 2011 (UTC)[reply]
What do you mean when you say "Express the fractional part of n in binary"? Does this mean multiply by 10 ^ 8 (or whatever my precision is) to get an integer, and then express in binary? 149.169.132.90 (talk) 04:18, 30 January 2011 (UTC)[reply]
No, I mean express it as a binary fraction. –Henning Makholm (talk) 05:01, 30 January 2011 (UTC)[reply]
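A sketch of the square-root method in Python's decimal module (precision handling is simplified, and n is assumed nonnegative; a careful implementation would also do the round-up/round-down pass described above):

    from decimal import Decimal, getcontext

    def leading_digits_of_10_pow(n, digits=15):
        # The leading digits of 10**n are those of 10**frac(n).
        getcontext().prec = digits + 10        # guard digits against rounding error
        frac = Decimal(str(n)) % 1
        result, root = Decimal(1), Decimal(10)
        for _ in range(4 * (digits + 10)):     # enough binary digits of frac
            root = root.sqrt()                 # 10**(1/2), 10**(1/4), 10**(1/8), ...
            frac *= 2
            if frac >= 1:                      # next bit of the binary fraction is 1
                result *= root
                frac -= 1
        return str(result)[: digits + 1]

    print(leading_digits_of_10_pow(0.30103))   # 2.0000000200..., as 0.30103 ≈ log10(2)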
You're trying to find the antilogarithm to the base 10, where n is the logarithm (to the base 10). You can convert n to a natural logarithm by multiplying by the natural log of 10. Then 10^n is exp(n ln 10) which you can compute with the Taylor series for the exponential function. 71.141.88.54 (talk) 08:31, 30 January 2011 (UTC)[reply]

Arc length proof

how would I prove that the length of the curve defined by c(t) = (u(t), v(t)) from t = a to t = b is ∫_a^b √(u′(t)² + v′(t)²) dt? 24.92.70.160 (talk) 03:34, 30 January 2011 (UTC)[reply]

Think about Riemann sums approximating this integral, along with the Pythagorean theorem. That does it. Michael Hardy (talk) 05:05, 30 January 2011 (UTC)[reply]
Some presentations even take your integral to be the definition of "length of the curve". If you don't have that as a definition, then you most likely need to work directly with whatever definition of curve length you do have. –Henning Makholm (talk) 05:09, 30 January 2011 (UTC)[reply]
I'm not sure it needs a formal proof; it follows from the definition of integration. If you insisted on a formal proof then you'd only need to prove that integration does what we're told it does. The point c(t) gives the position of a particle at time t. Then the derivative (dc/dt)(t) gives the velocity of that particle at time t. The magnitude of the velocity is the speed. Using a prime to mean differentiation with respect to t, the magnitude of (dc/dt)(t) is √(u′(t)² + v′(t)²).
Then, by the definition of integration, the integral of the speed between t = a and t = b (where a ≤ b) tells you how far the particle has travelled in that time. Thus the distance travelled is ∫_a^b √(u′(t)² + v′(t)²) dt.
Notice that in differential geometry the phrase Euclidean arc-length is often used to tell this arc-length apart from others. That's because the arc-length in your question is a differential invariant of Euclidean transformations. There are other notions of arc-length that are invariants of different transformation groups, see for example special affine arc-length. Fly by Night (talk) 13:49, 30 January 2011 (UTC)[reply]
"The definition of integration" in this context is usually taken to mean limits of Riemann sums. And the idea that the magnitude of a vector is the square root of the sum of the squares of the components is the Pythagorean theorem. Michael Hardy (talk) 18:11, 30 January 2011 (UTC)[reply]

Fly-by-night, you're confused about what is a definition and what is a theorem. Michael Hardy (talk) 18:12, 30 January 2011 (UTC)[reply]

Dude, go away. Fly by Night (talk) 19:56, 30 January 2011 (UTC)[reply]
Not gonna happen. You have a mathematician helping you here, and that's all you can say to your benefactor? Michael Hardy (talk) 23:28, 30 January 2011 (UTC)[reply]
Benefactor? Helping me? Get over yourself! You're not helping, you're just being a supercilious old fool. You're more interested in correcting people and pointing out minor semantic errors than you are in helping the OP. Although I shouldn't expect anything else from you: a man that spends his time chastising editors, on their own talk pages, for using - instead of − and the like. My post gave the OP a physical and intuitive understanding of why arc-length is expressed the way it is. But, worried that someone else's reply might actually be more useful than your own, you decide to go on the offensive. It's not a competition. We don't get rosettes for the best answer. It's a collaboration. What with your formalism and my physically intuitive reply the OP had a perfect answer; but you can't accept that. As for you being a mathematician, well, the last time I checked my pay packet, I too was employed by a mathematical research institute. Get over yourself, get a life, and get back to editing. Fly by Night (talk) 01:39, 31 January 2011 (UTC)[reply]
Get over yourself. Constructive criticism ≠ personal attack, so stop sending the latter in response to the former. --COVIZAPIBETEFOKY (talk) 03:48, 31 January 2011 (UTC)[reply]
Wow, pots and kettles spring to mind. Fly by Night (talk) 13:34, 31 January 2011 (UTC)[reply]
<sarcasm>Sounds like Good Will Hunting. Except more realistic.</sarcasm> Dmcq (talk) 12:31, 31 January 2011 (UTC)[reply]

"Fly-by-night", listen: I answered a question. You responded by saying "go away". Michael Hardy (talk) 06:08, 31 January 2011 (UTC)[reply]

"Michael Hardy", you told me I was confusing definitions and theorems; that's when I told you to go away. Fly by Night (talk) 13:35, 31 January 2011 (UTC)[reply]
Which, standing alone, is a somewhat baffling accusation, because both curve lengths as well as (Riemann) integrals have several slightly different equivalent characterizations, each of which can be taken to be a definition making the others theorems. It's not objective which is which. –Henning Makholm (talk) 13:43, 31 January 2011 (UTC)[reply]
Truth to be told, I don't think FbN's comment was very illuminating, but it certainly didn't deserve that kind of attack. In fact I wonder whether Michael Hardy actually intended to attack my comment about differing definitions, but overlooked my signature and thought FbN had written it, because we both wrote at the same indentation level. –Henning Makholm (talk) 13:46, 31 January 2011 (UTC)[reply]
There's nothing wrong with my comment saying you were confusing theorems with definitions. It doesn't make sense to define arc length in a way that only works when the curve is sufficiently differentiable, when a much simpler definition involving neither derivatives nor integrals is available, nor does it make a lot of sense to define integrals by using antiderivatives unless you want to argue in favor of a new way of developing the theory, and that's not what you were doing. Your answer was rude. Michael Hardy (talk) 17:33, 31 January 2011 (UTC)[reply]
Maybe the answer would depend on whether the OP was studying an analysis or geometry course. If it's analysis then the Riemann sum would be the way to go. If the course has a more geometric emphasis then that approach is overkill.--Salix (talk): 20:01, 31 January 2011 (UTC)[reply]
Since the word "prove" was used, Riemann sums didn't seem like overkill. Michael Hardy (talk) 20:14, 31 January 2011 (UTC)[reply]

See rectifiable curve. It is basically routine to prove this formula from the definition given there (at least, say, for continuously differentiable curves). 166.137.140.202 (talk) 18:23, 4 February 2011 (UTC)[reply]
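For the record, the Riemann-sum computation alluded to above can be outlined like this (a heuristic sketch only; the mean value theorem supplies the points ξ_i and η_i):

    \[
    \sum_{i=1}^{m}\sqrt{\bigl(u(t_i)-u(t_{i-1})\bigr)^2+\bigl(v(t_i)-v(t_{i-1})\bigr)^2}
    =\sum_{i=1}^{m}\sqrt{u'(\xi_i)^2+v'(\eta_i)^2}\,(t_i-t_{i-1})
    \longrightarrow \int_a^b\!\sqrt{u'(t)^2+v'(t)^2}\,dt,
    \]

where a = t_0 < t_1 < … < t_m = b partitions [a, b], the left-hand side is the length of the inscribed polygon (Pythagoras on each chord), and the limit is taken as the mesh of the partition tends to 0.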

Converting from rectangular to polar form

I have three impedances that need adding together. And conversion to polar form. Could someone explain how that's done?

So, for instance, the example in my book has:

I don't understand how they got from one to the other. Thanks, Dismas|(talk) 09:09, 30 January 2011 (UTC)[reply]

You need to do the addition in rectangular coordinates and then find the polar form afterwards. –Henning Makholm (talk) 10:02, 30 January 2011 (UTC)[reply]
By the way you have added the electrical impedances in parallel and the result you have is the inverse of the resultant impedance. Dmcq (talk) 10:53, 30 January 2011 (UTC)[reply]
Thanks, Henning. I understood everything up to the arctan bit. But you're just finding out the angle based on the two sides of the triangle, I think. Dismas|(talk)
Yes. I assumed you had learned about Euler's formula e^(iθ) = cos θ + i sin θ. It seems that you write it e^(jθ). –Henning Makholm (talk) 13:33, 30 January 2011 (UTC)[reply]
I did in my calculus class last year but, with the exception of the name, don't remember it. I'm getting another degree while working, so I only have time (and energy) for one class at a time. So when my circuits teacher cruises through things, it's a bit daunting. Everyone else is taking the math courses at the same time whereas I've had to take them in past semesters. Dismas|(talk) 21:56, 30 January 2011 (UTC)[reply]
Well, the point of classes usually is that you learn something that you might want to remember for later because it is actually useful. (Otherwise, it's not a class but a psychology experiment). The standard mathematical way to write down a parameterization of the unit circle in C is one of those things... –Henning Makholm (talk)
Wow. Thanks for the chastising for not remembering a couple things from a calculus class that I took a year ago. I'll try to remember that there are people like you who never forget anything... but I'm probably too retarded. Dismas|(talk) 10:40, 1 February 2011 (UTC)[reply]
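For anyone following along, the whole rectangular-to-polar workflow fits in a few lines of Python; the impedance values below are made up, since the book's numbers aren't preserved in this thread:

    import cmath, math

    z1, z2, z3 = complex(4, 3), complex(2, -5), complex(1, 6)
    z = z1 + z2 + z3                 # add in rectangular form first

    r, theta = cmath.polar(z)        # z = r * e^(j*theta)
    print(z)                                           # (7+4j)
    print(round(r, 3), round(math.degrees(theta), 2))  # 8.062 29.74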

Explanation of a few topology concepts

Hi all,

I'm taking a first course in algebraic topology and I'm trying to get my head around a few of the concepts, but as many of you who were in my position at one point probably know, there's often some very weird behavior which is rather hard to picture. I'd really appreciate any aid you can give me with the following problem.

Given continuous f : S^(n−1) → X, define X ∪_f B^n to be the space given by gluing an n-ball to X via the (smallest possible) equivalence relation identifying x in the sphere with f(x) in X, i.e. the quotient of the disjoint union of X and B^n over the equivalence relation. Prove that if f, g are homotopic maps then X ∪_f B^n and X ∪_g B^n are homotopy equivalent. By using this, show that the dunce hat, given by the cone on S^1 in R^3 with vertex (0,0,1) and identifying points (cos 2πt, sin 2πt, 0) with (1−t, 0, t) for 0 ≤ t ≤ 1, is contractible (hint: it may help to unfurl the cone along the segment (1−t, 0, t).)

The first part of the problem is I suppose simply an issue of algebra: my thought is that we 'glue' the balls onto the spaces as explained via the f and g equivalence relations, then we don't really know anything about the spaces except that both f and g are homotopic - can we simply apply the homotopy F from f to g to X and the sphere in B^n, effectively continuously deforming the set of points identified by the equivalence relation on points f(x) until we reach the set of points identified with g(x), and then extend it to an identity map on the points of B^n which are not in X or on the sphere? Then once we have this function which is some sort of 'F patched together with the identity', we can take the reverse of F, from g to f, and somehow also patch that together with the identity. I can vaguely picture what's going on, but I'm having trouble formalising the argument - intuitively I find it hard to picture what's going on here; even in the n=2 case it isn't necessarily obvious to me.

The second part I can see they want to take the 'hat' to be the space X ∪_f B^2 - one of the issues I'm having is that the earlier part of the problem identifies points between the otherwise disjoint union of the ball and the space X, whereas in this case we're talking about identifying part of the cone with another part of the cone - even though it is a 1-sphere in that sense, surely it is intrinsically part of the cone already and not 'disjoint' from it, so I don't see how the first part applies. Am I right to be concerned about this? Perhaps it isn't really an issue topologically speaking, though given how one point can often ruin everything I would be surprised - but at any rate, I think the obvious map from S^1 to the image with which we're identifying points on the cone is (cos 2πt, sin 2πt, 0) ↦ (1−t, 0, t); I don't honestly see how 'unfurling the cone' in the standard way to make a triangle with sides identified in the normal way helps particularly, and again, any comments would be really honestly appreciated, just so I can figure out what's going on here - once I understand, I presume it's just a case of finding a map to which f is homotopic (the identity map perhaps?) and then showing the space which it is homotopy equivalent to is obviously contractible. Thank you very much for the help, it will genuinely make a big difference to me if I can get my head around this! Typeships17 (talk) 22:13, 30 January 2011 (UTC)[reply]

The first part is a special case of Proposition 0.18 in Hatcher's (free) online book, which you can find here. As for the second part, you can view the dunce cap as a triangle with its edges identified as here. Thus it is the space obtained by attaching B^2 to a circle X via a map f : S^1 → X that winds around twice in one direction and then once in the opposite direction. That map is homotopic to one that winds around just once. Thus by the first part, the dunce cap is homotopy equivalent to a disk. 82.124.101.35 (talk) 08:14, 31 January 2011 (UTC)[reply]
Typeships17, as 82.124.101.35 remarked, it's in Hatcher's book. I took a look and he states it for CW pairs. When I first read your question I took X to mean an arbitrary space. After giving it some thinking I felt the need to extend certain functions on subsets of X to the whole of X. The most general extension theorem I know of is Tietze extension theorem and that only applies to functions to the reals and from normal spaces. So do you mean by X an arbitrary space or as in Hatcher's statement? Money is tight (talk) 14:35, 31 January 2011 (UTC)[reply]
As far as I'm aware, X is simply an arbitrary space. Does that make the proof incorrect? I have the book with me, but I'm not knowledgeable enough about CW complexes to know whether or not it's still okay... Typeships17 (talk) 23:39, 31 January 2011 (UTC)[reply]
Proposition 0.18 applies as is to the case where X is an arbitrary space. The point is that (B^n, S^(n−1)) is a CW-pair. But really, you don't need to bother with the case of a general CW-pair. If you refer to the proof of Proposition 0.16 (which is used in the proof of 0.18), the special case in which the CW-pair is (B^n, S^(n−1)) is covered in the first three lines (and then used to prove the general case). 82.124.101.35 (talk) 04:08, 1 February 2011 (UTC)[reply]
Okay, I understand the proof of the first part - it has quite a nice visually appealing sketch proof now that I look at it. However, the second part isn't clear to me - how is the dunce cap a situation where we can attach the B^2? I guess we define X to be the cone itself, then that identifies 2 edges of the triangle, though how we identify the bottom edge as a 'circle' as it was on the cone is beyond me; perhaps I'm mentally unfolding the object wrongly. Then the map f as given moves up the two 'sloped' edges of the triangle as t increases from 0 to 1, or equivalently moving around the circle. How do we identify the circle/triangle appropriately to obtain the right space for the homotopy equivalence? I didn't completely follow the first response, sorry - this is all fairly new to me, I'll do my best to try and follow. Typeships17 (talk) 18:41, 1 February 2011 (UTC)[reply]
The first thing is to convince yourself that the definition you gave of a dunce hat is the same as the quotient space pictured at dunce hat (topology). The cone can be pictured as a triangle with two sides identified, as you say. It is best to do this in such a way that the two sides identified correspond to the segment (1-t,0,t). The identification of the boundary circle with the segment in your original definition can then be viewed as the identification of the remaining side of the triangle with the other two.
Taking for granted that the dunce hat is as in the figure (and forgetting the cone), take X to be a circle. The triangle is a copy of B^2. Map its boundary onto X in the way suggested by the identifications: starting at the bottom left vertex, go twice around X in the positive direction, then once around in the negative direction. This gives a map f from the boundary of the triangle to X, and since it is surjective, the resulting construction is exactly the dunce hat. Now f is homotopic to any map g that goes around once positively, which could be taken to be the identity. (Homotopy classes of loops without basepoint correspond to conjugacy classes in π_1(X), which is isomorphic to Z, with the isomorphism counting the number of times a loop winds around X.) The space obtained by attaching B^2 to X via g is just B^2, which is contractible. 82.124.101.35 (talk) 22:33, 1 February 2011 (UTC)[reply]
Ah, I see! I was thinking about it backwards: I was taking the circle and saying that as you go around it once, you'll move up one corresponding edge of the triangle/cone, rather than looking at what happens when you move around the triangle and noticing that since our 2 'sloped' sides are in correspondence with the base which is a circle, we get a 3-to-1 mapping around the circle since every point in the triangle boundary is identified with 2 others. My problem was trying to map in the most literal sense from a circle, onto what I assumed must be the triangle as X, not from the boundary of the triangle onto another space. I really can't thank you enough :) Typeships17 (talk) 02:10, 15 February 2011 (UTC)[reply]
You're welcome. 82.124.101.35 (talk) 03:34, 3 February 2011 (UTC)[reply]


January 31

Start at different points, add 1. Evaluate evenness.

This is a situation I just came up with. Not sure if it even has enough information to answer.

You start out with the number 0. Over the next 100 seconds +1 is added to the number at random intervals.
You start out with the number 1. Over the next 100 seconds +1 is added to the number at random intervals.

Assuming you ran this 50,000 times which starting point would give you more even numbers, 0 or 1? Or would it not matter? Yes, I already know I'm an idiot. --71.240.162.87 (talk) 06:25, 31 January 2011 (UTC)[reply]

You're going to have to describe more precisely what you mean when you say "at random intervals." —Bkell (talk) 06:33, 31 January 2011 (UTC)[reply]
My first guess as to the meaning of "random intervals" would be a Poisson process. But it's specified that it takes 100 seconds, but NOT specified what the average frequency of these additions of +1 is, making it impossible to know what to make of the "100 seconds". Either way, the number of even numbers you get in 100 seconds is random. In the first case, you already have an even number at the beginning; in the second case you don't; that's the only relevant difference between the two processes. If the additions of +1 occur on average every β seconds, then the even numbers occur on average every 2β seconds. The expected number of even numbers you'd get in the first case would be 1 more than in the second case. But if β is small, then the standard deviation of the number of even numbers you'd get in either case would be large, so you'd have close to a 50% chance of getting more even numbers in the first case and a 50% chance of getting more even numbers in the second case. "Close to", but not exact. More tomorrow maybe..... Michael Hardy (talk) 07:24, 31 January 2011 (UTC)[reply]
I think the OP is asking about the probability that the final number after 100 seconds is odd or even. That is, what is the probability that a variable with Poisson distribution is even? Clearly this depends on the distribution's parameter λ -- with very low λ the sum is overwhelmingly likely to be 0, which is even, but with high λ one expects the probability of odd and even sums both to converge towards 1/2.
Unless I'm mistaken, P(even number of events) is (1 + e^(−2λ))/2. Therefore, starting with 0 will always give a higher probability of an even end sum than starting with 1, but the difference will be extremely slight when one expects many increments to happen during the 100 seconds. –Henning Makholm (talk) 08:32, 31 January 2011 (UTC)[reply]
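A quick simulation consistent with that formula (the rate and run count are arbitrary choices; Poisson sampling uses Knuth's method since the standard library has no built-in):

    import random
    from math import exp

    def poisson(lam):
        # Knuth's method: multiply uniforms until the product drops below e^-lam.
        L, k, p = exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= L:
                return k
            k += 1

    lam, runs = 4.0, 50000
    for start in (0, 1):
        evens = sum((start + poisson(lam)) % 2 == 0 for _ in range(runs))
        print(start, evens / runs)
    print("predicted for start 0:", (1 + exp(-2 * lam)) / 2)   # 0.500167...
    # The gap between the two starts is far below sampling noise here,
    # matching the "extremely slight" remark above.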

Interchange of summation and differentiation with Fourier series

I'm aware of simple, piecewise-smooth functions that can be characterized almost everywhere by Fourier series but where term-wise differentiation leads to non-convergent series, illustrating why the operations of differentiation and summation are not always interchangeable. However, are there examples of such functions whose Fourier series, when differentiated term-wise, lead to convergent but incorrect results almost everywhere? By incorrect, I mean where the term-wise differentiated Fourier series does not correspond to the derivative of the function represented.--Leon (talk) 08:13, 31 January 2011 (UTC)[reply]

Probabilistic Inclusion Exclusion.

How do I prove the probabilistic version of the Inclusion-exclusion principle without using induction? I understand the proof sketch given in the article but what I want is the probabilistic version without taking expectations. The reason is that the book I am reading from (Ross's A First Course in Probability) hasn't defined expectations yet and has still given a sketch of the result by counting. This would have appeared natural to me in the counting measure, but I can't relate that to probabilities, can I?-Shahab (talk) 11:42, 31 January 2011 (UTC)[reply]

If the counting is of equally probable events then yes, you can just use a measure equal to the probability of the events in a set, that is just the number of elements in a set compared to the overall number. You can use the inclusion-exclusion principle directly to calculate with those probabilities. Is that what you're thinking of or am I missing something? Dmcq (talk) 12:54, 31 January 2011 (UTC)[reply]
What exactly is it that you want to prove? Why do you want to avoid induction? It is not clear what you think the connection between induction and expectations is (I cannot offhand think of one).
In general, the inclusion-exclusion principle works for any measure, including both the counting measure and probability measures. The proof sketched in our article supposes that you can integrate over the measure, but that's just a shortcut (and if you know enough measure theory to think of the counting measure, you can probably do the integration even without knowing that "integral over the probability measure" is later going to be abbreviated "expectation"). You can prove it instead by induction over the number of sets, in which case all you need is finite additivity of the measure. But again, why don't you want induction? –Henning Makholm (talk) 13:34, 31 January 2011 (UTC)[reply]
Inclusion-exclusion is a special case of Moebius inversion, see here, where you have to choose the partial order to be set inclusion. Count Iblis (talk) 13:49, 31 January 2011 (UTC)[reply]
Hmmm... It seems that as usual I was confused myself. (Just to give a little background, I am a self-learner.) If I understand correctly, there are two proofs: 1. Induction. 2. Integration. My question then was (and is still) this: Is there any other "simpler" proof? Thanks to all of you.-Shahab (talk) 14:12, 31 January 2011 (UTC)[reply]
The whole business becomes rather obvious if you sum up 1−(1−a)(1−b)(1−c)... for all the points in the union of A, B and C, setting a, b, c to 1 or 0 depending on whether the point is in those respective sets. Then ab for instance is 1 for the intersection of A and B, and the original expression is 1 for all the elements in the union of the sets. The − here works out as being the exclusive or operation. Dmcq (talk) 19:26, 31 January 2011 (UTC)[reply]
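That indicator-function identity can be checked mechanically against inclusion-exclusion; a brute-force sketch over random sets (the universe and set sizes are arbitrary):

    import random
    from itertools import combinations

    universe = range(30)
    A, B, C = (set(random.sample(universe, 12)) for _ in range(3))
    sets = [A, B, C]

    # 1 - (1-a)(1-b)(1-c), summed over all points, counts the union.
    lhs = sum(1 - (1 - (p in A)) * (1 - (p in B)) * (1 - (p in C)) for p in universe)

    # Inclusion-exclusion over the nonempty subfamilies.
    rhs = sum((-1) ** (r + 1) * len(set.intersection(*fam))
              for r in range(1, 4) for fam in combinations(sets, r))

    print(lhs, rhs, len(A | B | C))   # all three agree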

February 1

Can someone help with the following proof

Hi everybody. I seem to have difficulties in understanding a specific thing in the following proof of the following claim. Maybe someone can help?

Claim: There exists a countable non-weakly Frechet-Urysohn space.

And here is the proof: Let x be an arbitrary point of the Stone-Cech remainder ω* of the discrete space ω. Then X = ω ∪ {x} is the desired example. Indeed, it is a countable space. Let us assume that X is weakly Frechet-Urysohn. As x ∈ cl(ω) \ ω, there must be a countable infinite disjoint family F such that x(RZ)F. Let A and B be any two infinite subfamilies of F. Then both x ∈ cl(∪A) and x ∈ cl(∪B) must hold. But this is impossible if we choose A and B to be disjoint subfamilies of F, as in this case ∪A and ∪B are disjoint subsets of ω and x is an ultrafilter of ω.

The thing which I don't understand is this: Why does the fact that ∪A and ∪B are disjoint imply that it is not possible that both x ∈ cl(∪A) and x ∈ cl(∪B)?

Here are the required definitions. Definition: A point x is called a weakly Frechet-Urysohn point if whenever x ∈ cl(A) \ A there exists a countable infinite disjoint family F of finite subsets of A such that for every neighborhood V of x the subfamily {F ∈ F : F ∩ V = ∅} is finite. If every point of a space is a weakly Frechet-Urysohn point then this space is called a weakly Frechet-Urysohn space. Definition: A point x and a countable infinite disjoint family F of X are said to be in the Reznichenko relation (Rz), written x(RZ)F, if the following holds: For every neighborhood V of x, the subfamily {F ∈ F : F ∩ V = ∅} is finite.

Thanks for any of you who will be able to help! Topologia clalit (talk) 17:27, 30 January 2011 (UTC)[reply]

For any set S ⊆ ω it holds that x ∈ cl(S) iff S is in the ultrafilter that represents x. This cannot be true for two disjoint sets. –Henning Makholm (talk) 14:58, 31 January 2011 (UTC)[reply]
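Spelled out, the ultrafilter fact used here is a one-liner (u below denotes the ultrafilter representing x):

    % A filter is closed under finite intersections and never contains the
    % empty set, so an ultrafilter u cannot contain two disjoint sets:
    \[
    S \in u \text{ and } T \in u \;\Longrightarrow\; S \cap T \in u,
    \qquad \text{but} \qquad S \cap T = \emptyset \notin u.
    \]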

Matching Socks in the Dark

You are in a dark room with no light. You have 19 grey socks and 25 black socks. What are the chances you will get a matching pair?

This is a question I came across on a list of wacky/difficult interview questions. Assuming it's not a trick (the question doesn't explicitly say you can only grab two socks, maybe I have a flashlight, etc.), I think the answer is 942/1122, but that seems too messy for a question like this. Anyone care to confirm/deny/correct?--68.51.73.79 (talk) 06:31, 1 February 2011 (UTC)[reply]

Doesn't seem messy. Calculate probability that both socks are grey. Calculate probability that both socks are black. Add together. Done. 71.141.88.54 (talk) 07:00, 1 February 2011 (UTC)[reply]
942/1122 is not too messy, but it's implausibly large. I got the same result at first -- it seems that we both thought that 19+25 was 34. But it's really 44, so the chances are 942/1892. –Henning Makholm (talk) 09:38, 1 February 2011 (UTC)[reply]
[ec] I got 471/946. -- Meni Rosenfeld (talk) 09:39, 1 February 2011 (UTC)[reply]
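For anyone who wants to check the arithmetic, a throwaway Python computation (numbers straight from the question):

    from math import comb

    grey, black = 19, 25
    total = grey + black  # 44, not 34!
    # P(two random socks match) = [C(19,2) + C(25,2)] / C(44,2)
    matching = comb(grey, 2) + comb(black, 2)
    print(matching, comb(total, 2))   # 471 946
    print(matching / comb(total, 2))  # 0.4979..., i.e. 471/946 = 942/1892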

Area of a quadrilateral

A quadrilateral has its sides taken in order in a plane, and the equations of the sides are given. What is the formula for the area of the quadrilateral in terms of the coefficients of the variables present in the equations of the lines (sides)? In fact I have found a simple formula for the area of a triangle whose side equations are given; now I want to get a general formula. If no one has found it, then I will try for it, for my own interest. TRue Path Finder

First find the coordinates of the corners by solving each pair of equations for neighboring sides using Cramer's rule. Then insert into the shoelace formula. –Henning Makholm (talk) 19:49, 1 February 2011 (UTC)[reply]

I know this method, but I want a general formula for the quadrilateral. Also for the pentagon, hexagon, ... if the equations of the sides are given. — Preceding unsigned comment added by True path finder (talkcontribs) 02:32, 2 February 2011 (UTC)[reply]

Well, for a set number n of sides, substitute letters for the coefficients in your formulae and follow Henning Makholm's method. The result will be a formula for the area in terms of the equations' coefficients. That doesn't work for an arbitrary number of sides; but if you do it for a few values of n and see a pattern (though I doubt you will), then you can try to prove a more general rule, e.g. by induction on n. (How does the area change when we add a side?) 99.40.234.78 (talk) 08:43, 2 February 2011 (UTC)[reply]
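To make Henning Makholm's recipe concrete, here is a small Python sketch. It assumes the sides are given, in cyclic order, as coefficient triples (a, b, c) of ax + by = c; the unit square is used as a test case. Substituting symbols for the numbers (say, in a computer algebra system) turns the same computation into the closed-form formula in the coefficients.

    def intersect(l1, l2):
        """Corner where a1*x + b1*y = c1 meets a2*x + b2*y = c2 (Cramer's rule)."""
        a1, b1, c1 = l1
        a2, b2, c2 = l2
        det = a1*b2 - a2*b1  # nonzero as long as the two sides are not parallel
        return ((c1*b2 - c2*b1) / det, (a1*c2 - a2*c1) / det)

    def polygon_area(lines):
        """Area of the polygon whose sides, taken in cyclic order, lie on `lines`."""
        n = len(lines)
        corners = [intersect(lines[i], lines[(i + 1) % n]) for i in range(n)]
        # Shoelace formula on the corners
        s = sum(corners[i][0]*corners[(i + 1) % n][1]
                - corners[(i + 1) % n][0]*corners[i][1] for i in range(n))
        return abs(s) / 2

    # Unit square: y = 0, x = 1, y = 1, x = 0, taken in order
    print(polygon_area([(0, 1, 0), (1, 0, 1), (0, 1, 1), (1, 0, 0)]))  # 1.0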

February 2

We recently went over the twelvefold way. I calculated two of the boxes in a way that my professor doesn't necessarily agree with, but I got the correct answer both times and I think it makes perfect sense. It's not for a grade, so it's not a big deal, but I just want to make sure my thinking makes sense.

In the case of N indistinguishable, X distinguishable, and an arbitrary function, I got the answer C(n+x−1, n). Then, to find the answer for a surjective function (same conditions on N and X), I thought of putting 1 ball in each of the x urns to start out. The balls are indistinguishable and there must be at least 1 in each urn, so I don't see any problem. Then you have n − x balls and no restrictions, so it's like the arbitrary case with n − x balls and x urns, and you get the answer C(n−1, n−x). My professor says he's not sure about double counting certain ways to do it. I don't think that comes into effect here. All that really matters is how many balls end up in the various urns. I did a similar solution for the case where N and X are both indistinguishable. That is, I put 1 ball in each urn and was left with n − x balls and x urns, so I plugged these into the arbitrary formula, and again I got the correct answer. StatisticsMan (talk) 02:31, 2 February 2011 (UTC)[reply]

Please explain the question: what do you have to find out, permutations, combinations, or something else? — Preceding unsigned comment added by True path finder (talkcontribs) 02:42, 2 February 2011 (UTC)[reply]
If you click the link, it explains it all. The point is you have a set N and a set X and you want to count the number of functions from N to X under various conditions. The twelve ways come about because the elements of N can be distinguishable or indistinguishable, similarly with X, and then the functions are either arbitrary, or required to be injective, or required to be surjective. So, 2 * 2 * 3 = 12 different types here. And we want to count the number of such functions in each of the 12 cases. StatisticsMan (talk) 02:46, 2 February 2011 (UTC)[reply]
I think your explanation for C(n−1, n−x) is valid. —Bkell (talk) 03:22, 2 February 2011 (UTC)[reply]
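The shift-by-one argument is also easy to test by brute force for small n and x, enumerating urn counts directly (a throwaway check):

    from math import comb
    from itertools import product

    def count(n, x, surjective=False):
        """Ways to put n indistinguishable balls into x distinguishable urns."""
        lo = 1 if surjective else 0  # surjective = at least one ball per urn
        return sum(1 for cells in product(range(lo, n + 1), repeat=x)
                   if sum(cells) == n)

    n, x = 6, 3
    assert count(n, x) == comb(n + x - 1, n)                   # arbitrary case
    assert count(n, x, surjective=True) == comb(n - 1, n - x)  # one-ball-per-urn trick
    print(count(n, x), count(n, x, surjective=True))  # 28 10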

Quick request for proof

Could someone please show me a quick proof of why, for a normal (normal: TT* = T*T) bounded linear operator T: X → X on a Hilbert space X, every element λ of the spectrum is an approximate eigenvalue - i.e. there exist unit vectors xn such that ||(T − λ)xn|| → 0? I feel like the proof should be fairly quick and obvious - I know it is true for self-adjoint operators (such as T*T), but that doesn't help me in a way I can see, though I suspect it is the reason behind the proof. Would someone mind showing me quickly why this is the case? Thank you! :) Otherlobby17 (talk) 04:36, 2 February 2011 (UTC)[reply]

Do you have the spectral theorem for bounded normal operators available? If so, that proves your property in much the same way as for self-adjoint operators. –Henning Makholm (talk) 10:54, 2 February 2011 (UTC)[reply]
Yes, though I found another way to solve it anyway - thank you! Otherlobby17 (talk) 16:07, 2 February 2011 (UTC)[reply]
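For anyone who finds this thread later, here is a sketch of one standard argument; it uses only the normality identity below, applied to S = T − λI:

    % For normal S (i.e. S S^* = S^* S):
    \[
    \|Sx\|^2 = \langle S^*Sx,\,x\rangle = \langle SS^*x,\,x\rangle = \|S^*x\|^2 .
    \]
    % Suppose \lambda \in \sigma(T) were NOT an approximate eigenvalue of T.
    % Then S = T - \lambda I is bounded below: \|Sx\| \ge c\|x\| for some c > 0.
    % Bounded below means S is injective with closed range. Since
    % \|S^*x\| = \|Sx\| \ge c\|x\|, also \ker S^* = \{0\}, so ran(S), whose
    % closure is (\ker S^*)^\perp = X, is dense. Dense plus closed means
    % ran(S) = X, hence S is invertible and \lambda \notin \sigma(T),
    % a contradiction.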

Line segment

In Line_segment#Definition it says that a line segment from a=(x1,y1) to b=(x2,y2) can be parametrized as {a+tb: t in [0,1]}. What is the proof for this? Thanks-210.212.167.113 (talk) 04:37, 2 February 2011 (UTC)[reply]

That's given as the definition of "line segment" so can't be proven (and doesn't require proof).—msh210 07:59, 2 February 2011 (UTC)[reply]
That's not quite correct; what you wrote is the line segment from a to a+b. The line segment from a to b is parametrized by {a + t(b-a): t in [0,1]}. When you see b-a, you should think of it as the vector from a to b. (The vector from one point to another is given by (endpoint)-(startpoint).) So, the line segment parametrization tells us we start at a (when t=0), then add some fraction of the vector from a to b. 146.186.131.121 (talk) 12:06, 2 February 2011 (UTC)[reply]
Er, yeah, quite right.—msh210 19:14, 2 February 2011 (UTC)[reply]
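Spelled out, the corrected parametrization is just a convex combination of the endpoints:

    \[
    \gamma(t) \;=\; a + t\,(b - a) \;=\; (1 - t)\,a + t\,b,
    \qquad t \in [0, 1],
    \]
    % so \gamma(0) = a, \gamma(1) = b, and \gamma traverses the segment at
    % constant speed as t runs from 0 to 1.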

Stats and probability

What is a good, rigorous text that covers stats and probability (together, of course) from the very basics up? Thanks. 24.92.70.160 (talk) 05:23, 2 February 2011 (UTC)[reply]

I've been using David Stirzaker's Probability and random variables: A beginner's guide. It doesn't cover statistics, but it's perfect as an introduction to probability. —Anonymous DissidentTalk 12:54, 2 February 2011 (UTC)[reply]
I'm a fan of the book I used in my first stats class; it covers probability and probability calculations as well as most applications of statistical inference. It does not get into the specifics of probability theory. The book is "Statistics: Concepts and Methods" by D. Monrad, W. Stout, and E. Harner. Cliff (talk) 17:35, 4 February 2011 (UTC)[reply]

Sectitive and Quantative Numbers

Before you say anything, this is just the theory of an 11-year-old boy. First I must introduce you to sectitive and quantative numbers. Picture a cup, with the brim resembling infinity and an ordinary liquid, say water, resembling positive and negative numbers. There is water in the cup, but it can only be filled to the brim at the most. So, that means you can have any number, ranging from infinity to negative infinity, in the cup. Let's say you have the same cup, but liquid helium being sectitive and quantative numbers. Liquid helium is in the cup, but slowly creeps out, defying viscosity. The liquid helium on the outside of the cup is the sectitive and quantative numbers. From now on I will refer to them as the "extreme numbers." The extreme numbers went above the brim and beyond, making them larger than infinity. Sectitive numbers and quantative numbers are opposites: sectitive numbers are smaller than negative infinity and quantative numbers are larger than positive infinity, therefore, the term "extreme numbers." The sign of a sectitive number is a division sign and a quantative sign is a multiplication sign. (Please excuse me, since I do not have multiplication and division signs on my keyboard, I will use % for division and x for multiplication.) So, quantative 25, or 25 over infinity, is x25. Sectitive 25, or 25 below negative infinity, is %25. But even the extreme numbers must have an ending point, and I call that omnity. The symbol for omnity is an upside down micro sign. (See micrometer for symbol.) So that concludes my definition of extreme numbers; I hope you liked my theory. —Preceding unsigned comment added by 205.237.144.164 (talk) 07:45, 2 February 2011 (UTC)[reply]

There's a good deal of real mathematical study of numbers "greater than infinity" and "less than negative infinity". See the article on ordinal numbers for a start.—msh210 07:53, 2 February 2011 (UTC)[reply]
Also hyperreal numbers. -- Meni Rosenfeld (talk) 08:28, 2 February 2011 (UTC)[reply]
Conformal Cyclic Cosmology might be fun for you to wrap your mind round too. Dmcq (talk) 09:47, 2 February 2011 (UTC)[reply]
Okay, so you've told us what the words and symbols for your new numbers are, but how do they behave? What can we do with them? Can we add and subtract? Multiply? What is ×38 plus -500? What is 2 times ÷13? Is that the same as ÷13 plus ÷13? –Henning Makholm (talk) 09:57, 2 February 2011 (UTC)[reply]
Yes, we'd like to see how they work. I hope the OP comes back to tell us. SemanticMantis (talk) 17:18, 3 February 2011 (UTC)[reply]

February 3

IPv4 vs IPv6 for dumdums?

Okay, will someone explain in simple-ish terms what the differences between IPv4 and IPv6 are, why they're incompatible, what switching over entails, what the format is for IPv6 addresses, etc.? I know how to use a computer and the Internet, so don't worry about that. I know what an IP address is, and that we've almost completely exhausted all the IPv4 ones. The IPv6 article kind of assumes you know about DNS and have technical knowledge of IPv4... --- c y m r u . l a s s (talk me, stalk me) 05:17, 3 February 2011 (UTC)[reply]

What it means: the end of the world as we know it. IPv4 addresses look like four octets separated by dots, like 123.21.12.34. IPv6 addresses are eight groups of four hex digits (16 bits each) separated by colons, like 2001:0db8:85a3:0000:0000:8a2e:0370:7334, but there are some shortcuts for compressing out blocks of zeros in them, etc. See IPv6 address. The basic issues are comparable to what would happen if the phone company suddenly had to start giving out phone numbers with 4x as many digits as the old ones. Everybody's phones and switches would have to be reprogrammed, etc., and stuff will break all over; sort of a Y2K problem, but "for real", since almost nobody has actually been getting ready for it. 71.141.88.54 (talk) 06:08, 3 February 2011 (UTC)[reply]
Huh. Interesting. Is there any way to slowly ease into it, or is it just gonna be a "boom, we're screwed" thing? And aren't some computers already using it? --- c y m r u . l a s s (talk me, stalk me) 06:41, 3 February 2011 (UTC)[reply]
See this article by Worthen and Tuna, 'Web running out of addresses,' from the Wall Street Journal on 1 February. EdJohnston (talk) 06:57, 3 February 2011 (UTC)[reply]
People with existing v4 addresses will be able to keep using them, but it will be very difficult to get new ones (imagining a mob-operated black market in them is only slightly fanciful). But almost nobody has been willing to deal with the hassle of v6 transition since the coming v4 exhaustion hasn't happened yet. There will be a transitional period of riots, looting, teeth gnashing, router reconfiguration confusion, overloaded tech support lines, web site outages, that sort of thing, plus lots of vendors cashing in on opportunities to sell you new crap instead of fixing your old crap. Lots of old systems will simply fall off of the internet. Eventually, though, v6 will work on all newer systems. Added: and yes, v6 has been available for years, with the idea of "start using it now and beat the crunch". But nobody has been bothering because they all have other stuff to fix "right now" while the v4 network is still humming along. It's just like putting off homework till the last minute. 71.141.88.54 (talk) 07:46, 3 February 2011 (UTC)[reply]
And we all know how well that goes... Hmm I'm wondering if it would be a good idea/worth the trouble to switch over. That is, if a lowly random like myself can actually do that :P --- c y m r u . l a s s (talk me, stalk me) 08:08, 3 February 2011 (UTC)[reply]
It is mostly your ISP and websites you visit who will have to deal with it. If you're running a very old OS like Windows ME you may have to upgrade, and you may also have a problem if you have an old router. Otherwise just prepare to deal with crappy service and software patches for a while, as problems get ironed out. If you're running a recent OS, it should be v6-ready, but various software applications may need upgrades. Sam Bowne has some very good materials here. 71.141.88.54 (talk) 08:46, 3 February 2011 (UTC)[reply]
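For the curious, the zero-compression shortcut mentioned above can be played with using Python's standard ipaddress module (a quick illustration, no network required):

    import ipaddress

    addr = ipaddress.ip_address('2001:0db8:85a3:0000:0000:8a2e:0370:7334')
    print(addr.version)     # 6
    print(addr.compressed)  # 2001:db8:85a3::8a2e:370:7334  (longest zero run -> '::')
    print(addr.exploded)    # full eight-group form, leading zeros restored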
By the way, this question probably should have gone to the computer ref desk rather than the math desk. 71.141.88.54 (talk) 08:47, 3 February 2011 (UTC)[reply]

supremum and infimum of a quotient of two series

I need help translating the following into English:

Given two sequences of positive real numbers (an) and (bn), we write

  • an = O(bn) if they satisfy limsup an/bn < ∞,
  • whereas an = Ω(bn) if they satisfy liminf an/bn > 0.
  • On the other hand, an = o(bn) means that lim an/bn = 0,
  • and an = ω(bn) means that lim an/bn = ∞.

I don't understand. I've read the entries on supremum and infimum but I can't work out what they mean in this particular context... can somebody help break it down in simple terms? —Preceding unsigned comment added by 118.208.23.171 (talk) 06:32, 3 February 2011 (UTC)[reply]

Fix your typos first. —Preceding unsigned comment added by 99.40.234.78 (talk) 06:53, 3 February 2011 (UTC)[reply]
Firstly, in case it isn't clear, there are four definitions here. I've edited the formatting to make this clear, and fixed a typo (you had typed in place of ). Have you read the entry on Limit superior and limit inferior#The case of sequences of real numbers? Do you have trouble only with the first two definitions, or all four of them? Shreevatsa (talk) 07:04, 3 February 2011 (UTC)[reply]
Apologies for the typos. My guess now is that there are two series of equal length, (an) and (bn), and a new series is created by dividing each element of the first by the corresponding element of the second. The third definition applies in the case that this new series converges to 0 and the fourth definition applies in the case that the new series diverges to infinity. However I am clueless as to the first two definitions, even after reading the article on limit superior and limit inferior.
Yes, "equal length" in that both are (countably) infinite. Unfolding the definitions of lim inf and lim sub, and collapsing some nested quantifiers that don't matter in this context, we get:
  • an = O(bn) means that there is some finite number K such that all an/bn are smaller than K.
  • an = Ω(bn) is the opposite: there is some K>0 such that all an/bn are larger than K.
Beware, incidentally, that this notation is well-established but horribly abuses the equals sign. For a given sequence (bn)n, there are many different sequences that are all O(bn). To make sense of it you should consider each separate instance of O(···) in an equation to mean some unknown series that happens to satisfy the condition. –Henning Makholm (talk) 09:54, 3 February 2011 (UTC)[reply]
Thanks! —Preceding unsigned comment added by 118.208.23.171 (talk) 10:26, 3 February 2011 (UTC)[reply]
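A toy numerical illustration of the first definition (the sequences here are arbitrary examples): with an = 3n + 10 and bn = n, the ratios an/bn never exceed a fixed K, so an = O(bn).

    ratios = [(3*n + 10) / n for n in range(1, 10001)]  # a_n / b_n
    print(max(ratios))  # 13.0, so K = 13 works
    print(ratios[-1])   # 3.001; the ratios decrease toward 3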

Made some further corrections to the notation (LaTeX \limsup, \liminf instead of '\lim \sup' and 'lim \inf'). --CiaPan (talk) 12:54, 3 February 2011 (UTC)[reply]

Affine Grassmannian

Resolved

What's the dimension of the affine Grassmannian AGr(n,k) which consists of all k-dimensional affine subspaces of Rn? In particular, I'm interested in AGr(n,1), i.e. the space of affine one-spaces in Rn, or equivalently the space of unoriented lines in Rn. Our article says that, as a homogeneous space, we can identify AGr(n,k) with the quotient

AGr(n,k) ≅ E(n) / [ E(k) × O(n−k) ]

where E(n) is the Euclidean group on Rn and O(m) is the orthogonal group on Rm. Doing a quick dimension check, the formula I get from this can't be right, because AGr(2,1) is diffeomorphic to the Möbius band, and so has dimension two (the lines have equations ax + by = c, which corresponds to (a:b:c), but we must delete (0:0:1) because a = b = 0 doesn't give a valid line; the projective plane minus a point is the Möbius band), while the dimension formula I computed gives one. My guess is that AGr(n,1) has dimension 2(n–1). I know that the space of oriented lines in Rn is diffeomorphic to the tangent bundle of the (n–1)-sphere and that that gives a double cover of the space of unoriented lines. Fly by Night (talk) 13:16, 3 February 2011 (UTC)[reply]

I get (k+1)(n-k) using a heuristic, degrees of freedom approach. It's not rigorous but it agrees with your guess.--RDBury (talk) 20:36, 3 February 2011 (UTC)[reply]
Yeah, you're exactly right. I misunderstood this section of the article on the orthogonal groups. I think that section might need some work. In fact, it turns out that dim E(n) = n + n(n−1)/2 = n(n+1)/2.
Since AGr(n,k) ≅ E(n) / [ E(k) × O(n−k) ] it follows that
dim AGr(n,k) = n(n+1)/2 − k(k+1)/2 − (n−k)(n−k−1)/2 = (k+1)(n−k),
as you quite rightly said. Thanks for that RDBury; it's appreciated. Fly by Night (talk) 00:51, 4 February 2011 (UTC)[reply]
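The dimension count is easy to double-check symbolically, e.g. with SymPy (the expression below is dim E(n) − dim E(k) − dim O(n−k)):

    from sympy import symbols, factor

    n, k = symbols('n k')
    dim = n*(n + 1)/2 - k*(k + 1)/2 - (n - k)*(n - k - 1)/2
    print(factor(dim))  # an expression equivalent to (k + 1)*(n - k)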

Journal article author affiliation

The article submission guidelines of a mathematics journal say that author affiliations are to be given at the end of the article. Two authors from the same place (same address) are submitting an article. One of them has another affiliation (permanent address). What is the best way (better than giving the common address of the authors twice) to give the affiliations? 14.139.128.15 (talk) 17:19, 3 February 2011 (UTC)[reply]

One way to handle it is like this: present author names at start, e.g. Joe Author*, Jane Math*†. At the end, you only need to indicate each affiliation once, e.g. *Affiliation 1, †Affiliation 2. If you want further help, you could name the journal. Also, are you using LaTeX? Many math journals provide authors with templates, .cls files, etc. Lastly, it probably isn't that important for submission. They will not reject you out of hand for some formatting issue. If your work is accepted, then they will work with you to make it right before it is published. SemanticMantis (talk) 20:07, 3 February 2011 (UTC)[reply]
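For example, in plain LaTeX this might look like the sketch below (a generic pattern with placeholder names and affiliations; if the journal supplies its own class file, its dedicated commands should be preferred):

    \documentclass{article}
    \begin{document}
    \title{On a Theorem of Somebody}
    \author{Joe Author$^{*}$ \and Jane Math$^{*\,\dagger}$}
    \date{}
    \maketitle

    % ... body of the article ...

    \bigskip
    \noindent $^{*}$Department of Mathematics, University One\\
    $^{\dagger}$Permanent address: Institute Two
    \end{document}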

Contour integral

My textbook is trying to show that the integral of (sin x)/x from 0 to ∞ is π/2. The author does this by setting up an integral over the complex plane, over a curve that is more-or-less a semicircle of radius R, centered at the origin, with a diameter along the real axis (looks like an upside-down bowl). For the proof, he uses that |eiz/z| ≤ |1/z|.

I don't see why this would be true. 74.15.137.130 (talk) 20:49, 3 February 2011 (UTC)[reply]

The denominators are the same, so we only need to establish the inequality for the numerators. The complex exponential exp(ix) lies on the unit circle of the complex plane (see diagram in linked article). So abs(exp(ix))=1. Is that enough? SemanticMantis (talk) 21:31, 3 February 2011 (UTC)[reply]
But z is generally complex. At least, the author uses this inequality over the part of the path that is in the complex plane. So wouldn't it have a norm greater than 1? 74.15.137.130 (talk) 23:17, 3 February 2011 (UTC)[reply]
My mistake, I misread the inequality. What do you know about the norm of z along the path that you're integrating along? This will give you bounds on exp(iz) that will probably do the trick. Try using the Laurent series of the exponential to get some cancellation with the denominators. SemanticMantis (talk) 00:58, 4 February 2011 (UTC)[reply]
If the path is in the upper half-plane, write z = x + iy, with y ≥ 0. Then |eiz| = |e−y+ix| = e−y ≤ 1. 82.124.101.35 (talk) 03:11, 4 February 2011 (UTC)[reply]
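That bound is easy to convince yourself of numerically (random points in the upper half-plane):

    import cmath, math, random

    for _ in range(5):
        x, y = random.uniform(-10, 10), random.uniform(0, 10)  # y >= 0
        z = complex(x, y)
        assert abs(abs(cmath.exp(1j*z)) - math.exp(-y)) < 1e-12
        assert abs(cmath.exp(1j*z)) <= 1 + 1e-12
    print("checks pass: |exp(iz)| = exp(-y) <= 1 when Im z >= 0")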

Made some minor corrections to LaTeX notation (added \left and \right to vertical bars, so their height corresponds to what they contain). --CiaPan (talk) 07:06, 4 February 2011 (UTC)[reply]

matrix/vector algebra problem

I am reading a paper, and there is one part that I cannot follow.

Here we have an equation

where are the entries of the stochastic matrix , is a constant, is a constant between 0 and 1, and

The above equation can be written in vector form, which I assume for will look like:

Now a row vector is introduced where

The entries of this row vector sum to one.

Both sides of the equation are then premultiplied by this vector which apparently gives

where

Now I cannot figure out how they got this equation from multiplying both sides by . Seemingly both sides have also been divided by , so I can understand what happened with the left hand side, and the and terms. But I cannot understand what happened to the and terms, so that nothing else was left except

The paper does not explain this. Does it follow from the information given? —Preceding unsigned comment added by 130.102.78.164 (talk) 04:44, 4 February 2011 (UTC)[reply]

Yes, it's clearly designed to. Let P be the (row) vector of log(pi)'s. Then in the original equation, the terms give the elements of the vector . The left factor of this cancels out with in the definition of v', so what is left after you multiply is . –Henning Makholm (talk) 09:50, 4 February 2011 (UTC)[reply]

Love heart

The graph of

looks like a love heart. Is there a more proper name for this curve, or for a general family to which it belongs? Thanks. (For reference: WolframAlpha.) —Anonymous DissidentTalk 12:27, 4 February 2011 (UTC)[reply]

If you want to say "heart-shaped" and still sound like an intellectual, you can say "cardioid" :) But cardioid has a specific meaning which is not the same as your curve. Tinfoilcat (talk) 12:53, 4 February 2011 (UTC)[reply]
See also the MathWorld page.--RDBury (talk) 14:20, 4 February 2011 (UTC)[reply]

Finding range of 7+5cos5x divided by 8-6cos6x

Normally, to find critical points, we use dy/dx = 0; then, from the critical points and the monotonicity of the function, we try to find the range. But here I am unable to find the critical points.

I got dy/dx = [150 sin(5x) cos(6x) − 200 sin(5x) − 252 sin(6x) − 180 sin(6x) cos(5x)] / (8 − 6 cos(6x))^2.
From here, I am unable to find the value of x for which it is 0.

Can you please help me with this? — Preceding unsigned comment added by Krishnashyam1994 (talkcontribs) 15:11, 4 February 2011 (UTC)[reply]

I don't see why this can't be solved by taking the range of the numerator and denominator, since both are cyclic and not synchronized with one another. The numerator has a range of [2,12]. The denominator has a range of [2,14]. So the function is bounded between 2/14 ≈ 0.143 and 12/2 = 6. The maximum 6 is actually attained, at x = 0, where cos 5x = cos 6x = 1. The lower bound 2/14 is not attained, though: it would need cos 5x = −1 and cos 6x = −1 at the same x, which never happens (cos 5x = −1 makes x an odd multiple of π/5, cos 6x = −1 makes x an odd multiple of π/6, and no x is both). Doing a quick plot, the function stays a little above that lower bound. -- kainaw 15:25, 4 February 2011 (UTC)[reply]
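A dense numerical scan over one period backs this up (a quick check, not a proof):

    import numpy as np

    x = np.linspace(0, 2*np.pi, 2_000_001)  # the function has period 2*pi
    y = (7 + 5*np.cos(5*x)) / (8 - 6*np.cos(6*x))
    print(y.max())  # 6.0 exactly, attained at x = 0
    print(y.min())  # a little above 2/14 = 0.1428..., which is never attained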

Coin nominal values

Why is it not good to have both 20 cent and 25 cent coins in a single currency? --84.61.176.167 (talk) 15:33, 4 February 2011 (UTC)[reply]

This is a classic example of the greedy algorithm when it comes to making change. If you want to give 40 cents in change and you have standard American currency of 1, 5, 10, and 25 cents, the least number of coins will be 1x25, 1x10, 1x5. If you add a 20 cent coin, the least number of coins will be 2x20 - which requires use of something other than the greedy algorithm. -- kainaw 15:36, 4 February 2011 (UTC)[reply]
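A small sketch contrasting the two approaches, with the 20-cent coin added to the usual denominations (greedy versus an exact dynamic-programming count):

    def greedy(amount, coins):
        """Coins used by the greedy algorithm (largest coin first)."""
        used = 0
        for c in sorted(coins, reverse=True):
            used += amount // c
            amount %= c
        return used

    def optimal(amount, coins):
        """Fewest coins possible, by dynamic programming."""
        best = [0] + [float('inf')]*amount
        for a in range(1, amount + 1):
            best[a] = 1 + min(best[a - c] for c in coins if c <= a)
        return best[amount]

    coins = [1, 5, 10, 20, 25]
    print(greedy(40, coins), optimal(40, coins))  # 3 (25+10+5) versus 2 (20+20)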
You have to define what you mean by "good". You could have a coin in every denomination from 1 cent to 99 cents, and then you could always make change with a single coin. But that benefit would be outweighed by the problems of having 99 different types of coins to keep track of. At the other extreme, you just have a 1 cent coin and it would take up to 99 of them to make change. There are other factors, such as that people find it easier to do arithmetic with multiples of 5 and 10. With various conflicting objectives, some of which are cultural, the question doesn't make sense mathematically. You could ask a more specific question like "Given there are to be 4 denominations, what should they be to minimize the average number of coins needed to make change?" You might then be able to show there is no solution that uses both a 20 and a 25 cent coin.--RDBury (talk) 17:27, 4 February 2011 (UTC)[reply]