(Sharp) inequality for Beta function


























I am trying to prove the following inequality concerning the Beta Function:
$$
\alpha x^\alpha B(\alpha, x\alpha) \geq 1 \quad \forall\ 0 < \alpha \leq 1,\ x > 0,
$$

where as usual $B(a,b) = \int_0^1 t^{a-1}(1-t)^{b-1}\,dt$.



In fact, I only need this inequality when $x$ is large enough, but it empirically seems to be true for all $x$.



The main reason I'm confident the result is true is that it is very easy to plot, and I've checked it numerically for a reasonable range of $x$ (say between $0$ and $10^{10}$). For example, for $x=100$, the plot is:



[Plot of the function to be proven greater than $1$]
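For readers who want to reproduce the experiment, here is a minimal sketch of the kind of numerical check described above (my own code, not part of the original question). It evaluates the product on a log scale via `math.lgamma`, so that huge values of $x\alpha$ do not overflow:

```python
from math import log, lgamma

def log_expr(alpha: float, x: float) -> float:
    """log of alpha * x**alpha * B(alpha, x*alpha), computed via
    B(a, b) = Gamma(a)*Gamma(b)/Gamma(a+b) on the log scale."""
    return (log(alpha) + alpha * log(x)
            + lgamma(alpha) + lgamma(x * alpha) - lgamma((x + 1) * alpha))

# grid of test points; the log scale keeps x*alpha up to 1e10 manageable
alphas = [i / 100 for i in range(1, 101)]   # 0.01 .. 1.00
xs = [10.0 ** k for k in range(-3, 11)]     # 1e-3 .. 1e10
worst = min(log_expr(a, x) for a in alphas for x in xs)
print(worst)  # ~0: nonnegative in exact arithmetic; tiny negatives are float noise
```

In exact arithmetic the minimum over this grid would be exactly $0$ (attained at $\alpha=1$); tiny negative printed values of order $10^{-5}$ can appear from cancellation in `lgamma` at very large arguments.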



Varying $x$, the inequality seems rather sharp: I was not able to find a point where the product exceeds roughly $1.5$ (but I do not need any such reverse inequality).



I know very little about Beta functions, so I apologize in advance if such a result is already known in the literature. I've tried looking around, but I always ended up with inequalities linking $B(a,b)$ with $\frac{1}{ab}$, which is quite different from what I am looking for, and which moreover only hold when both $a$ and $b$ are smaller than $1$, which is not my setting.



I have tried the following to prove it, but without success: the inequality is well known to be an equality when $\alpha = 1$, and the limit as $\alpha \to 0$ should equal $1$, too. Therefore, it would be enough to prove that there exists at most one $0 < \alpha < 1$ where the derivative of the expression to be bounded vanishes. This derivative can be written explicitly in terms of the digamma function $\psi$ as:
$$
x^\alpha B(\alpha, x\alpha) \Big(\alpha \psi(\alpha) - (x+1)\alpha\,\psi((x+1)\alpha) + x\alpha\, \psi(x\alpha) + 1 + \alpha \log x \Big).
$$

Dividing by $x^\alpha B(\alpha, x\alpha)\,\alpha$, this becomes
$$
-f(\alpha) + \frac{1}{\alpha} + \log x,
$$

where $f(\alpha) = -\psi(\alpha) + (x+1)\psi((x+1)\alpha) - x\, \psi(x\alpha)$ is, as proven by Alzer and Berg (Theorem 4.1), a completely monotonic function. Unfortunately, the difference of two completely monotonic functions (such as $f(\alpha)$ and $\frac{1}{\alpha} + C$) can vanish at arbitrarily many points, so this does not allow us to conclude.
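As a sanity check on this formula (my own sketch, not part of the original question), one can verify numerically that the stated logarithmic derivative matches a central finite difference of $\log\big(\alpha x^\alpha B(\alpha, x\alpha)\big)$. The helper `digamma` below is a standard recurrence-plus-asymptotic-series implementation, since Python's `math` module has no digamma:

```python
from math import log, lgamma

def digamma(z: float) -> float:
    """psi(z) via the recurrence psi(z) = psi(z+1) - 1/z and the
    standard asymptotic series for large arguments."""
    acc = 0.0
    while z < 10.0:
        acc -= 1.0 / z
        z += 1.0
    inv2 = 1.0 / (z * z)
    return acc + log(z) - 0.5 / z - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def log_F(alpha: float, x: float) -> float:
    # log of alpha * x**alpha * B(alpha, x*alpha)
    return (log(alpha) + alpha * log(x)
            + lgamma(alpha) + lgamma(x * alpha) - lgamma((x + 1) * alpha))

def dlog_F(alpha: float, x: float) -> float:
    # -f(alpha) + 1/alpha + log(x), with f as defined in the question
    f = -digamma(alpha) + (x + 1) * digamma((x + 1) * alpha) - x * digamma(x * alpha)
    return -f + 1.0 / alpha + log(x)

alpha, x, h = 0.3, 100.0, 1e-6
numeric = (log_F(alpha + h, x) - log_F(alpha - h, x)) / (2 * h)
print(dlog_F(alpha, x), numeric)  # the two values agree closely
```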



Many thanks in advance for any hint on how to get such a bound!



[EDIT]: As pointed out in the comments, the link to the paper of Alzer and Berg pointed to the wrong version; I have corrected the link.


































  • There is no Theorem 4.1 in the quoted paper. There is a Theorem 4 there, but it does not talk about the digamma function. Can you please clarify?
    – GH from MO
    Dec 29 '18 at 21:58






  • @GHfromMO Thanks for pointing out the wrong link; the one I inserted did not send to the most updated version. I have now corrected it!
    – Ester Mariucci
    Dec 29 '18 at 22:14
















reference-request ca.classical-analysis-and-odes inequalities special-functions






edited Dec 29 '18 at 22:13
asked Dec 29 '18 at 20:39 by Ester Mariucci
3 Answers
You can get away with the usual distribution function mumbo-jumbo. The general lemma is as follows:



Let $\mu,\nu$ be non-negative measures and $f,g$ be non-negative functions such that there exists $s_0>0$ with the property that $\mu\{f>s\}\ge \nu\{g>s\}$ for $s\le s_0$ and the reverse inequality holds for $s\ge s_0$. Suppose also that $\int f^q\,d\mu=\int g^q\,d\nu<+\infty$ for some $q>0$. Then, as long as the integrals in question are finite, we have $\int f^p\,d\mu\ge \int g^p\,d\nu$ for $0<p\le q$ and the reverse inequality holds for $p\ge q$.



The proof of the lemma is rather straightforward. Let $p\le q$ (that is the case you are really interested in):
$$
\int f^p\,d\mu-\int g^p\,d\nu=p\int_0^\infty s^p[\mu\{f>s\}-\nu\{g>s\}]\frac{ds}s
\\
=p\int_0^\infty [s^p-s_0^{p-q}s^q][\mu\{f>s\}-\nu\{g>s\}]\frac{ds}s\ge 0\,.
$$



Now we use it with $f(t)=t(1-t)^x$, $d\mu=\frac{dt}{t(1-t)}$ on $(0,1)$, $g(t)=t$, $d\nu=\frac{dt}{t}$ on $(0,\frac1x)$. Since the maximum of $t(1-t)^x$ is attained at $t=\frac{1}{x+1}$, we see that the function $s\mapsto \mu\{f>s\}$ drops to $0$ before the function $s\mapsto \nu\{g>s\}$ does. Also, the first function has a negative derivative of larger absolute value than the second one for each value of $s$ where it is still positive. To see this, notice that the set where $f>s$ is an interval $(u,v)=(u(s),v(s))$ that shrinks as $s$ increases, and the left end $u$ of this interval satisfies
$$
du\left(\frac 1u-\frac x{1-u}\right)=\frac{ds}s\,,
$$

so trivially
$$
\frac{du}{u(1-u)}\ge \frac{du}u>\frac {ds}s\,.
$$

The right end moving to the left can only increase the decay speed. Finally, for $q=1$, the integrals are equal (which also shows that the graphs of the distribution functions must indeed intersect), so for $0<p\le 1$ (which plays the role of $\alpha$), we have the desired inequality.
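To make the bookkeeping concrete (my addition, under the assumptions of the answer): with this choice of $f,g,\mu,\nu$ one has $\int f^p\,d\mu = \int_0^1 t^{p-1}(1-t)^{px-1}\,dt = B(p,px)$ and $\int g^p\,d\nu = x^{-p}/p$, so the lemma with $q=1$ (where both integrals equal $1/x$) gives exactly $\alpha x^\alpha B(\alpha,x\alpha)\ge 1$. A quick numerical spot check of the first identity:

```python
from math import lgamma, exp

def beta(a: float, b: float) -> float:
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

def int_fp_dmu(p: float, x: float, n: int = 200_000) -> float:
    """Midpoint-rule value of int_0^1 t**(p-1) * (1-t)**(p*x-1) dt,
    after substituting t = u**(1/p) to remove the singularity at t = 0.
    (The substituted integrand stays bounded when p*x >= 1.)"""
    h = 1.0 / n
    total = sum((1.0 - ((i + 0.5) * h) ** (1.0 / p)) ** (p * x - 1.0)
                for i in range(n))
    return total * h / p

p, x = 0.5, 3.0
print(int_fp_dmu(p, x), beta(p, p * x))  # both ~ B(1/2, 3/2) = pi/2
print(beta(p, p * x) >= x ** (-p) / p)   # True: the lemma's conclusion
```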






answered 2 days ago by fedja

  • That's a wonderful and clever proof, many thanks! I'm marking this as accepted (although @esg's answer also proves the desired inequality), also because it came first.
    – Ester Mariucci
    yesterday

































One can also use Jensen's inequality. Let (for $\sigma>0$) $G_\sigma$ denote a random variable with $\Gamma(1,\sigma)$-distribution, i.e. having Lebesgue density
$$f_\sigma(t)=\frac{t^{\sigma-1}}{\Gamma(\sigma)}\, e^{-t}\;1_{(0,\infty)}(t)\;,$$
so that $\mathbb{E}(G_\sigma)=\sigma$.
Since $\alpha\in (0,1)$, the functions $t\mapsto t^\alpha$ and $t\mapsto t^{1-\alpha}$ on $\mathbb{R}_+$ are concave. By Jensen's inequality,
$$\frac{\Gamma(\alpha+\alpha x)}{\Gamma(\alpha x)}=\mathbb{E}(G_{x\alpha}^\alpha)\leq \left(\mathbb{E}(G_{x\alpha})\right)^\alpha=(x\alpha)^{\alpha}$$



and
$$\frac{1}{\Gamma(\alpha)}=\mathbb{E}(G_\alpha^{1-\alpha})\leq\left(\mathbb{E}(G_{\alpha})\right)^{1-\alpha}=\frac{1}{\alpha^{\alpha-1}}.$$
Using these gives
$$B(\alpha,x \alpha)=\frac{\Gamma(\alpha)\,\Gamma(x\alpha)}{\Gamma(\alpha +x\alpha)}\geq \frac{\Gamma(\alpha)}{\alpha^\alpha x^\alpha}\geq \frac{\Gamma(\alpha)}{\alpha\,\Gamma(\alpha)\,x^\alpha}=\frac{1}{\alpha x^\alpha},$$
as desired.
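Both displayed bounds are instances of $\mathbb{E}(G_\sigma^c)=\Gamma(\sigma+c)/\Gamma(\sigma)\le \sigma^c$ for $0<c<1$ (the first with $\sigma=x\alpha$, $c=\alpha$; the second with $\sigma=\alpha$, $c=1-\alpha$). A small numerical spot check of this bound (my own sketch, not part of the answer):

```python
from math import lgamma, exp

def moment(sigma: float, c: float) -> float:
    """E[G_sigma**c] = Gamma(sigma + c) / Gamma(sigma) for G_sigma ~ Gamma(sigma, 1)."""
    return exp(lgamma(sigma + c) - lgamma(sigma))

# Jensen: E[G**c] <= (E[G])**c = sigma**c for the concave map t -> t**c, 0 < c < 1
ok = all(moment(s, c) <= s ** c + 1e-12
         for s in [0.1, 0.5, 1.0, 3.0, 50.0]
         for c in [0.1, 0.5, 0.9])
print(ok)  # True
```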






answered yesterday by esg

  • That's a very quick way to prove it, thanks! This proof is of particular interest to me because I was also aiming to prove $\Gamma(\alpha) \geq \alpha^{\alpha+1}$, which indeed is essentially equivalent to the inequality on Beta by using Stirling on both $\Gamma(x\alpha)$ and $\Gamma((x+1)\alpha)$.
    – Ester Mariucci
    yesterday

































This is an attempt to strengthen your claim.



If $x$ is large then $B(x,y)\sim \Gamma(y)x^{-y}$, and hence
$$B(\alpha x,\alpha)\sim \Gamma(\alpha)(\alpha x)^{-\alpha},$$
where $\Gamma(z)$ is the Euler Gamma function.



On the other hand, for small $\alpha$, we have the expansion
$$\Gamma(1+\alpha)=1+\alpha\Gamma'(1)+\mathcal{O}(\alpha^2),$$
with $\Gamma'(1)=-\gamma$, where $\gamma$ is the Euler constant. Since $\alpha\Gamma(\alpha)=\Gamma(1+\alpha)$, it follows that
$$\Gamma(\alpha)= \frac1{\alpha}-\gamma+\mathcal{O}(\alpha).$$



We may now combine the above two estimates to obtain
$$\alpha x^{\alpha}B(\alpha x,\alpha)\sim \alpha x^{\alpha}\left(\frac1{\alpha}-\gamma\right)(\alpha x)^{-\alpha}=\left(\frac1{\alpha}-\gamma\right)\alpha^{1-\alpha}\geq1$$
provided $\alpha$ is small enough. For example, $0<\alpha<\frac12$ works.
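One can eyeball this asymptotic numerically (my own sketch, not from the answer): for a fixed large $x$, the exact product approaches $(\frac1\alpha-\gamma)\alpha^{1-\alpha}$ as $\alpha$ decreases:

```python
from math import log, lgamma, exp

EULER_GAMMA = 0.5772156649015329

def exact(alpha: float, x: float) -> float:
    """alpha * x**alpha * B(alpha*x, alpha), computed via lgamma."""
    return exp(log(alpha) + alpha * log(x)
               + lgamma(alpha) + lgamma(alpha * x) - lgamma(alpha * (x + 1)))

def approx(alpha: float) -> float:
    """(1/alpha - gamma) * alpha**(1 - alpha), the small-alpha asymptote."""
    return (1.0 / alpha - EULER_GAMMA) * alpha ** (1.0 - alpha)

for a in [0.2, 0.1, 0.01]:
    print(exact(a, 1e6), approx(a))  # the columns approach each other as alpha shrinks
```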






share|cite|improve this answer























    Your Answer





    StackExchange.ifUsing("editor", function () {
    return StackExchange.using("mathjaxEditing", function () {
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    });
    });
    }, "mathjax-editing");

    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "504"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: true,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmathoverflow.net%2fquestions%2f319725%2fsharp-inequality-for-beta-function%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    6














    You can get away with the usual distribution function mumbo-jumbo. The general lemma is as follows:



    Let $mu,nu$ be non-negative measures and $f,g$ be non-negative functions such that there exists $s_0>0$ with the property that $mu{f>s}ge nu{g>s}$ for $sle s_0$ and the reverse inequality holds for $sge s_0$. Suppose also that $int f^q,dmu=int g^q,dnu<+infty$ for some $q>0$. Then, as long as the integrals in question are finite, we have $int f^p,dmuge int g^p,dnu$ for $0<ple q$ and the reverse inequality holds for $pge q$.



    The proof of the lemma is rather straightforward. Let $ple q$ (that is the case you are really interested in)
    $$
    int f^p,dmu-int g^p,dnu=pint_0^infty s^p[mu{f>s}-nu{g>s}]frac{ds}s
    \
    =pint_0^infty [s^p-s_0^{p-q}s^q][mu{f>s}-nu{g>s}]frac{ds}sge 0,.
    $$



    Now we use it with $f(t)=t(1-t)^x$, $dmu=frac{dt}{t(1-t)}$ on $(0,1)$, $g(t)=t$, $dnu=frac{dt}{t}$ on $(0,frac1x)$. Since the maximum of $t(1-t)^x$ is attained at $t=frac{1}{x+1}$, we see that the function $smapsto mu{f>s}$ drops to $0$ before the function $smapsto nu{g>s}$. Also, the first function has larger in absolute value negative derivative than the second one for each value of $s$ where it is still positive. To see it, notice that the set where $f>s$ is an interval $(u,v)=(u(s),v(s))$ that shrinks as $s$ increases and the left end $u$ of this interval satisfies
    $$
    duleft(frac 1u-frac x{1-u}right)=frac{ds}s,,
    $$

    so trivially
    $$
    frac{du}{u(1-u)}ge frac{du}u>frac {ds}s
    $$

    The right end moving to the left can only increase the decay speed. Finally, for $q=1$, the integrals are equal (which also shows that the graphs of the distribution functions must indeed intersect), so for $0<ple 1$ (which plays the role of $alpha$), we have the desired inequality.






    share|cite|improve this answer





















    • That's a wonderful and clever proof, many thanks! I'm marking this as accepted (although @egs's one also proves the desired inequality) also because it came first.
      – Ester Mariucci
      yesterday
















    6














    You can get away with the usual distribution function mumbo-jumbo. The general lemma is as follows:



    Let $mu,nu$ be non-negative measures and $f,g$ be non-negative functions such that there exists $s_0>0$ with the property that $mu{f>s}ge nu{g>s}$ for $sle s_0$ and the reverse inequality holds for $sge s_0$. Suppose also that $int f^q,dmu=int g^q,dnu<+infty$ for some $q>0$. Then, as long as the integrals in question are finite, we have $int f^p,dmuge int g^p,dnu$ for $0<ple q$ and the reverse inequality holds for $pge q$.



    The proof of the lemma is rather straightforward. Let $ple q$ (that is the case you are really interested in)
    $$
    int f^p,dmu-int g^p,dnu=pint_0^infty s^p[mu{f>s}-nu{g>s}]frac{ds}s
    \
    =pint_0^infty [s^p-s_0^{p-q}s^q][mu{f>s}-nu{g>s}]frac{ds}sge 0,.
    $$



    Now we use it with $f(t)=t(1-t)^x$, $dmu=frac{dt}{t(1-t)}$ on $(0,1)$, $g(t)=t$, $dnu=frac{dt}{t}$ on $(0,frac1x)$. Since the maximum of $t(1-t)^x$ is attained at $t=frac{1}{x+1}$, we see that the function $smapsto mu{f>s}$ drops to $0$ before the function $smapsto nu{g>s}$. Also, the first function has larger in absolute value negative derivative than the second one for each value of $s$ where it is still positive. To see it, notice that the set where $f>s$ is an interval $(u,v)=(u(s),v(s))$ that shrinks as $s$ increases and the left end $u$ of this interval satisfies
    $$
    duleft(frac 1u-frac x{1-u}right)=frac{ds}s,,
    $$

    so trivially
    $$
    frac{du}{u(1-u)}ge frac{du}u>frac {ds}s
    $$

    The right end moving to the left can only increase the decay speed. Finally, for $q=1$, the integrals are equal (which also shows that the graphs of the distribution functions must indeed intersect), so for $0<ple 1$ (which plays the role of $alpha$), we have the desired inequality.






    share|cite|improve this answer





















    • That's a wonderful and clever proof, many thanks! I'm marking this as accepted (although @egs's one also proves the desired inequality) also because it came first.
      – Ester Mariucci
      yesterday














    6












    6








    6






    You can get away with the usual distribution function mumbo-jumbo. The general lemma is as follows:



    Let $mu,nu$ be non-negative measures and $f,g$ be non-negative functions such that there exists $s_0>0$ with the property that $mu{f>s}ge nu{g>s}$ for $sle s_0$ and the reverse inequality holds for $sge s_0$. Suppose also that $int f^q,dmu=int g^q,dnu<+infty$ for some $q>0$. Then, as long as the integrals in question are finite, we have $int f^p,dmuge int g^p,dnu$ for $0<ple q$ and the reverse inequality holds for $pge q$.



    The proof of the lemma is rather straightforward. Let $ple q$ (that is the case you are really interested in)
    $$
    int f^p,dmu-int g^p,dnu=pint_0^infty s^p[mu{f>s}-nu{g>s}]frac{ds}s
    \
    =pint_0^infty [s^p-s_0^{p-q}s^q][mu{f>s}-nu{g>s}]frac{ds}sge 0,.
    $$



    Now we use it with $f(t)=t(1-t)^x$, $dmu=frac{dt}{t(1-t)}$ on $(0,1)$, $g(t)=t$, $dnu=frac{dt}{t}$ on $(0,frac1x)$. Since the maximum of $t(1-t)^x$ is attained at $t=frac{1}{x+1}$, we see that the function $smapsto mu{f>s}$ drops to $0$ before the function $smapsto nu{g>s}$. Also, the first function has larger in absolute value negative derivative than the second one for each value of $s$ where it is still positive. To see it, notice that the set where $f>s$ is an interval $(u,v)=(u(s),v(s))$ that shrinks as $s$ increases and the left end $u$ of this interval satisfies
    $$
    duleft(frac 1u-frac x{1-u}right)=frac{ds}s,,
    $$

    so trivially
    $$
    frac{du}{u(1-u)}ge frac{du}u>frac {ds}s
    $$

    The right end moving to the left can only increase the decay speed. Finally, for $q=1$, the integrals are equal (which also shows that the graphs of the distribution functions must indeed intersect), so for $0<ple 1$ (which plays the role of $alpha$), we have the desired inequality.






    share|cite|improve this answer












    You can get away with the usual distribution function mumbo-jumbo. The general lemma is as follows:



    Let $mu,nu$ be non-negative measures and $f,g$ be non-negative functions such that there exists $s_0>0$ with the property that $mu{f>s}ge nu{g>s}$ for $sle s_0$ and the reverse inequality holds for $sge s_0$. Suppose also that $int f^q,dmu=int g^q,dnu<+infty$ for some $q>0$. Then, as long as the integrals in question are finite, we have $int f^p,dmuge int g^p,dnu$ for $0<ple q$ and the reverse inequality holds for $pge q$.



    The proof of the lemma is rather straightforward. Let $ple q$ (that is the case you are really interested in)
    $$
    int f^p,dmu-int g^p,dnu=pint_0^infty s^p[mu{f>s}-nu{g>s}]frac{ds}s
    \
    =pint_0^infty [s^p-s_0^{p-q}s^q][mu{f>s}-nu{g>s}]frac{ds}sge 0,.
    $$



    Now we use it with $f(t)=t(1-t)^x$, $dmu=frac{dt}{t(1-t)}$ on $(0,1)$, $g(t)=t$, $dnu=frac{dt}{t}$ on $(0,frac1x)$. Since the maximum of $t(1-t)^x$ is attained at $t=frac{1}{x+1}$, we see that the function $smapsto mu{f>s}$ drops to $0$ before the function $smapsto nu{g>s}$. Also, the first function has larger in absolute value negative derivative than the second one for each value of $s$ where it is still positive. To see it, notice that the set where $f>s$ is an interval $(u,v)=(u(s),v(s))$ that shrinks as $s$ increases and the left end $u$ of this interval satisfies
    $$
    duleft(frac 1u-frac x{1-u}right)=frac{ds}s,,
    $$

    so trivially
    $$
    frac{du}{u(1-u)}ge frac{du}u>frac {ds}s
    $$

    The right end moving to the left can only increase the decay speed. Finally, for $q=1$, the integrals are equal (which also shows that the graphs of the distribution functions must indeed intersect), so for $0<ple 1$ (which plays the role of $alpha$), we have the desired inequality.







    share|cite|improve this answer












    share|cite|improve this answer



    share|cite|improve this answer










    answered 2 days ago









    fedja

    37.5k7109203




    37.5k7109203












    • That's a wonderful and clever proof, many thanks! I'm marking this as accepted (although @egs's one also proves the desired inequality) also because it came first.
      – Ester Mariucci
      yesterday


















    • That's a wonderful and clever proof, many thanks! I'm marking this as accepted (although @egs's one also proves the desired inequality) also because it came first.
      – Ester Mariucci
      yesterday
















    That's a wonderful and clever proof, many thanks! I'm marking this as accepted (although @egs's one also proves the desired inequality) also because it came first.
    – Ester Mariucci
    yesterday




    That's a wonderful and clever proof, many thanks! I'm marking this as accepted (although @egs's one also proves the desired inequality) also because it came first.
    – Ester Mariucci
    yesterday











    5














    One can also use Jensen's inequality. Let (for $sigma>0$) $G_sigma$ denote a random variable with $Gamma(1,sigma)$-distribution, i.e. having Lebesgue density
    $$f_sigma(t)=frac{t^{sigma-1}}{Gamma(sigma)} e^{-t};1_{(0,infty)}(t);,$$
    then $mathbb{E}(G_sigma)=sigma$.
    Since $alphain (0,1)$ the functions $tmapsto t^alpha$ resp. $tmapsto t^{1-alpha}$ on $mathbb{R}_+$ are concave. By Jensen's inequality
    $$frac{Gamma(alpha+alpha x)}{Gamma(alpha x)}=mathbb{E}(G_{xalpha}^alpha)leq left(mathbb{E}(G_{xalpha})right)^alpha=(xalpha)^{alpha}$$



    and
    $$frac{1}{Gamma(alpha)}=mathbb{E} G_alpha^{1-alpha}leqleft(mathbb{E}(G_{alpha})right)^{1-alpha}=frac{1}{alpha^{alpha-1}}$$
    Using that gives
    $$B(alpha,x alpha)=frac{Gamma(alpha),Gamma(xalpha)}{Gamma(alpha +xalpha)}geq frac{Gamma(alpha)}{alpha^alpha x^alpha}geq frac{Gamma(alpha)}{alpha,Gamma(alpha),x^alpha}=frac{1}{alpha x^alpha},$$
    as desired.






    share|cite|improve this answer

















    • 1




      That's a very quick way to prove it, thanks! This proof is of particular interest to me because I was also aiming to prove Gamma(alpha) geq alpha^{alpha+1}, which indeed is essentially equivalent to the inequality on Beta by using Stirling on both Gamma(xalpha) and Gamma((x+1)alpha).
      – Ester Mariucci
      yesterday
















    5














    One can also use Jensen's inequality. Let (for $sigma>0$) $G_sigma$ denote a random variable with $Gamma(1,sigma)$-distribution, i.e. having Lebesgue density
    $$f_sigma(t)=frac{t^{sigma-1}}{Gamma(sigma)} e^{-t};1_{(0,infty)}(t);,$$
    then $mathbb{E}(G_sigma)=sigma$.
    Since $alphain (0,1)$ the functions $tmapsto t^alpha$ resp. $tmapsto t^{1-alpha}$ on $mathbb{R}_+$ are concave. By Jensen's inequality
    $$frac{Gamma(alpha+alpha x)}{Gamma(alpha x)}=mathbb{E}(G_{xalpha}^alpha)leq left(mathbb{E}(G_{xalpha})right)^alpha=(xalpha)^{alpha}$$



    and
    $$frac{1}{Gamma(alpha)}=mathbb{E} G_alpha^{1-alpha}leqleft(mathbb{E}(G_{alpha})right)^{1-alpha}=frac{1}{alpha^{alpha-1}}$$
    Using that gives
    $$B(alpha,x alpha)=frac{Gamma(alpha),Gamma(xalpha)}{Gamma(alpha +xalpha)}geq frac{Gamma(alpha)}{alpha^alpha x^alpha}geq frac{Gamma(alpha)}{alpha,Gamma(alpha),x^alpha}=frac{1}{alpha x^alpha},$$
    as desired.






    share|cite|improve this answer

















    • 1




      That's a very quick way to prove it, thanks! This proof is of particular interest to me because I was also aiming to prove Gamma(alpha) geq alpha^{alpha+1}, which indeed is essentially equivalent to the inequality on Beta by using Stirling on both Gamma(xalpha) and Gamma((x+1)alpha).
      – Ester Mariucci
      yesterday














    5












    5








    5






    One can also use Jensen's inequality. Let (for $sigma>0$) $G_sigma$ denote a random variable with $Gamma(1,sigma)$-distribution, i.e. having Lebesgue density
    $$f_sigma(t)=frac{t^{sigma-1}}{Gamma(sigma)} e^{-t};1_{(0,infty)}(t);,$$
    then $mathbb{E}(G_sigma)=sigma$.
    Since $alphain (0,1)$ the functions $tmapsto t^alpha$ resp. $tmapsto t^{1-alpha}$ on $mathbb{R}_+$ are concave. By Jensen's inequality
    $$frac{Gamma(alpha+alpha x)}{Gamma(alpha x)}=mathbb{E}(G_{xalpha}^alpha)leq left(mathbb{E}(G_{xalpha})right)^alpha=(xalpha)^{alpha}$$



    and
    $$frac{1}{Gamma(alpha)}=mathbb{E} G_alpha^{1-alpha}leqleft(mathbb{E}(G_{alpha})right)^{1-alpha}=frac{1}{alpha^{alpha-1}}$$
    Using that gives
    $$B(alpha,x alpha)=frac{Gamma(alpha),Gamma(xalpha)}{Gamma(alpha +xalpha)}geq frac{Gamma(alpha)}{alpha^alpha x^alpha}geq frac{Gamma(alpha)}{alpha,Gamma(alpha),x^alpha}=frac{1}{alpha x^alpha},$$
    as desired.






    answered yesterday









    esg

1,711








    • 1




That's a very quick way to prove it, thanks! This proof is of particular interest to me because I was also aiming to prove $\Gamma(\alpha) \geq \alpha^{\alpha+1}$, which indeed is essentially equivalent to the inequality on Beta by using Stirling on both $\Gamma(x\alpha)$ and $\Gamma((x+1)\alpha)$.
      – Ester Mariucci
      yesterday
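Incidentally, the inequality mentioned in the comment, $\Gamma(\alpha)\ge\alpha^{\alpha+1}$ on $(0,1]$, follows from the bound $\Gamma(\alpha)\ge\alpha^{\alpha-1}$ established by the Jensen argument above, since $\alpha^2\le 1$. A quick standard-library check (sketch only):

```python
from math import gamma

# Gamma(alpha) >= alpha**(alpha - 1) >= alpha**(alpha + 1) on (0, 1];
# the second step holds because alpha**2 <= 1 there.
for k in range(1, 101):
    alpha = k / 100.0
    assert gamma(alpha) >= alpha ** (alpha - 1) >= alpha ** (alpha + 1), alpha
```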














    4














This is an attempt to strengthen your claim.

If $x$ is large then $B(x,y)\sim \Gamma(y)\,x^{-y}$ and hence
$$B(\alpha x,\alpha)\sim \Gamma(\alpha)\,(\alpha x)^{-\alpha}\,,$$
where $\Gamma(z)$ is the Euler Gamma function.

On the other hand, for small $\alpha$, we have the expansion
$$\Gamma(1+\alpha)=1+\alpha\,\Gamma'(1)+\mathcal{O}(\alpha^2)=1-\gamma\alpha+\mathcal{O}(\alpha^2),$$
where $\gamma=-\Gamma'(1)$ is the Euler constant. Since $\alpha\,\Gamma(\alpha)=\Gamma(1+\alpha)$, it follows that
$$\Gamma(\alpha)=\frac1{\alpha}-\gamma+\mathcal{O}(\alpha).$$

We may now combine the above two estimates to obtain
$$\alpha x^{\alpha}B(\alpha x,\alpha)\sim \alpha x^{\alpha}\left(\frac1{\alpha}-\gamma\right)(\alpha x)^{-\alpha}=\left(\frac1{\alpha}-\gamma\right)\alpha^{1-\alpha}\geq1,$$
provided $\alpha$ is small enough. For example, $0<\alpha<\frac12$ works.
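Both the small-$\alpha$ expansion and the resulting lower bound can be checked numerically with the standard library (verification sketch only; the value of $\gamma$ is hard-coded):

```python
from math import gamma

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

for k in range(1, 50):
    alpha = k / 100.0  # alpha ranges over (0, 0.5)
    # the expansion error Gamma(alpha) - (1/alpha - gamma) should be O(alpha);
    # 1.1 is a loose constant for this test range
    assert abs(gamma(alpha) - (1 / alpha - EULER_GAMMA)) < 1.1 * alpha, alpha
    # the bound used to conclude for small alpha
    assert (1 / alpha - EULER_GAMMA) * alpha ** (1 - alpha) >= 1, alpha
```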






        edited Dec 29 '18 at 22:15

























        answered Dec 29 '18 at 21:40









        T. Amdeberhan

17.1k





























