Picard's method does not solve a first-order differential equation?
I have the following first-order differential equation:
$$x'(t)=-(x(t))^2+2x(t),\quad t\geq 0,\quad x(0)=1.$$
Now I want to obtain an approximation of $x(t)$ using Picard's method. The first Picard iterates of this problem are given by:
$$x_1 = x_0 + \int_0^t F(s,x_0(s))\,ds = 1+\int_0^t \bigl(-(1)^2+2\bigr)\,ds = t+1,$$
$$x_2 = 1+ \int_0^t \bigl(-(s+1)^2+2(s+1)\bigr)\,ds = -\frac{1}{3}t^3+t+1,$$
$$x_3 = 1+ \int_0^t \left(-\Bigl(-\tfrac{1}{3}s^3+s+1\Bigr)^2+2\Bigl(-\tfrac{1}{3}s^3+s+1\Bigr)\right)ds = -\frac{1}{63}t^7 + \frac{2}{15}t^5 - \frac{1}{3}t^3 + t+1.$$
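For reference, the iteration can be automated symbolically; below is a minimal sympy sketch (the helper name `picard_iterate` is just my own choice, not a library function):

```python
# Minimal sympy sketch of the Picard iteration for x' = -x^2 + 2x, x(0) = 1.
import sympy as sp

t, s = sp.symbols("t s")

def F(x):
    return -x**2 + 2*x            # right-hand side of the ODE

def picard_iterate(x_prev):
    # one step: x_{k+1}(t) = x(0) + int_0^t F(x_k(s)) ds
    return sp.expand(1 + sp.integrate(F(x_prev.subs(t, s)), (s, 0, t)))

x = sp.Integer(1)                 # x_0(t) = 1
for k in range(1, 4):
    x = picard_iterate(x)
    print(f"x_{k}(t) =", x)       # prints x_1, x_2, x_3 as above (up to term ordering)
```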
This should converge to the true solution $x(t)$, but when I derive the solution analytically I get the following:
$$x'(t)=-(x(t))^2+2x(t)\Longleftrightarrow \frac{x'(t)}{-(x(t))^2+2x(t)}=1.$$
Using the fact that:
$$\frac{1}{x(2-x)}=\frac{1}{2}\left(\frac{1}{x}+\frac{1}{2-x}\right),$$
integrating both sides from $0$ to $t$ then yields:
$$\int_0^t\frac{x'(s)}{x(s)}\,ds + \int_0^t \frac{x'(s)}{2-x(s)}\,ds = \int_0^t 2\,ds$$
$$\Longleftrightarrow \ln|x(t)| -\ln|2-x(t)|=2t+c_1,\quad c_1\in\mathbb{R}$$
$$\Longleftrightarrow \frac{x(t)}{2-x(t)} = c_2e^{2t}, \quad c_2 = \pm e^{c_1}$$
$$\Longleftrightarrow x(t) = \frac{2e^{2t}}{c_3 + e^{2t}}, \quad c_3 = \frac{1}{c_2}.$$
Combining this with the initial condition $x(0)=1$ gives $c_3=1$, so $x(t)$ is given by:
$$x(t)=\frac{2e^{2t}}{1+e^{2t}}.$$
When I plot the functions obtained by the Picard method, I get a totally different result than when I plot the function $x(t)$ given above. Why is this? Shouldn't the Picard method converge to the true solution? I also used more iterations, but both methods keep giving different answers. Does anyone know why this is, or am I doing something wrong?
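For concreteness, this is roughly the comparison I am making (a minimal numpy/matplotlib sketch; the names `x3` and `x_exact` are my own):

```python
# Compare the third Picard iterate with the closed-form solution on a common grid.
import numpy as np
import matplotlib.pyplot as plt

def x3(t):
    return -t**7/63 + 2*t**5/15 - t**3/3 + t + 1      # third Picard iterate

def x_exact(t):
    return 2*np.exp(2*t) / (1 + np.exp(2*t))          # analytic solution

t = np.linspace(0, 2, 400)
plt.plot(t, x_exact(t), label=r"exact $2e^{2t}/(1+e^{2t})$")
plt.plot(t, x3(t), "--", label=r"Picard iterate $x_3$")
plt.xlabel("t")
plt.ylabel("x(t)")
plt.legend()
plt.show()
```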
differential-equations fixed-point-theorems contraction-operator picard-scheme
You can solve your equation as a Bernoulli equation: setting $y=1/x$ gives $y'=1-2y$ and thus $y=\frac12+Ce^{-2t}$; with the initial condition $y(0)=1$ you get $C=\frac12$ and thus your solution, with much less effort.
– LutzL, Dec 8 at 13:28
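Spelled out (my own expansion of the hint, in the comment's notation):
$$y=\frac{1}{x}\;\Longrightarrow\; y'=-\frac{x'}{x^2}=\frac{x^2-2x}{x^2}=1-\frac{2}{x}=1-2y,$$
$$y(t)=\frac12+Ce^{-2t},\qquad y(0)=1\Rightarrow C=\frac12,\qquad x(t)=\frac{1}{y(t)}=\frac{2}{1+e^{-2t}}=\frac{2e^{2t}}{1+e^{2t}}.$$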
2 Answers
You should have gotten a plot like this [plot of the Picard iterates $x_1,x_2,x_3$ against the exact solution omitted], which gives a visible idea of convergence for $|t|\le 0.5$.
Let's consider the interval $|x-1|<1$ in state space in the proof construction of the Picard theorem. Then for the Picard iteration operator $P(x)(t)=x_0+\int_0^t f(s,x(s))\,ds$ we get here
$$|P(x)(t)-1|\le \int_0^t|x(s)^2-2x(s)|\,ds\le \int_0^t\bigl(1+|x(s)-1|^2\bigr)\,ds\le 2t.$$
To stay inside the chosen bound one needs $|t|\le \bar t\le\frac 12$.
Next, the Lipschitz constant is the maximum of $|-2x+2|=2|x-1|$ over the given region, which gives $L=2$. For convergence in the Picard-Lindelöf proof, that is, contractivity of the Picard map, one also needs $L\bar t<1$, which holds for any $\bar t<\frac12$, for example for $\bar t=\frac13$.
This quantitative reasoning confirms that you get reasonably fast visible convergence only for $|t|\le\frac12$ or smaller intervals.
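A quick numerical illustration of this (my own sketch, not part of the original argument): the discretized Picard step below approximates the integral with the trapezoidal rule and compares the sup-norm error of the iterates on $[0,\frac13]$ and on $[0,1]$.

```python
# Sup-norm error of successive (discretized) Picard iterates against the exact solution.
import numpy as np

def rhs(x):
    return -x**2 + 2*x

def picard_step(x_vals, t_grid):
    # x_{k+1}(t) = 1 + int_0^t rhs(x_k(s)) ds, integral done by the trapezoidal rule
    g = rhs(x_vals)
    increments = 0.5 * (g[1:] + g[:-1]) * np.diff(t_grid)
    return 1.0 + np.concatenate(([0.0], np.cumsum(increments)))

for T in (1/3, 1.0):
    t = np.linspace(0.0, T, 2001)
    exact = 2*np.exp(2*t) / (1 + np.exp(2*t))
    x = np.ones_like(t)                      # x_0 = 1
    for k in range(1, 6):
        x = picard_step(x, t)
        print(f"T = {T:.3f}, k = {k}: sup-error = {np.max(np.abs(x - exact)):.2e}")
# The error decays markedly faster on the shorter interval [0, 1/3] than on [0, 1].
```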
Let's explore obstacles to the convergence of the Picard iteration from the point of view of the result, that is, the convergence of the resulting power series. While the denominator $1+e^{2t}$ has no real roots, the radius of convergence of the power series that you compute is also determined by the complex roots of the denominator, which are poles of the function itself. Now
$$e^{2t}=-1\iff 2t=i(2k+1)\pi$$
has its smallest solutions at $t=\pm i\frac\pi2$, which gives a more optimistic radius of convergence for the power series expansion, and thus possibly also for the Picard iteration, of $\frac\pi2\approx1.57$.
But convergence is much slower the closer you get to the boundary of the region of convergence, and outside of the region of convergence it is to be expected that the partial sums of the power series diverge wildly.
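Numerically (my own sketch, not part of the original answer), one can watch the Taylor partial sums settle down at $t=1$, inside $|t|<\frac\pi2$, and wander off at $t=2$, outside it:

```python
# Evaluate Taylor partial sums of f(t) = 2*exp(2t)/(1 + exp(2t)) inside and outside |t| < pi/2.
import sympy as sp

t = sp.symbols("t")
f = 2*sp.exp(2*t) / (1 + sp.exp(2*t))

for n in (5, 9, 13, 17):
    p = sp.series(f, t, 0, n + 1).removeO()          # Taylor polynomial of degree <= n
    print(f"degree {n}: p(1) = {float(p.subs(t, 1)):.6f}, p(2) = {float(p.subs(t, 2)):.3f}")

print(f"exact:     f(1) = {float(f.subs(t, 1)):.6f}, f(2) = {float(f.subs(t, 2)):.3f}")
# p(1) converges to f(1) ~ 1.761594, while p(2) moves further and further from f(2) ~ 1.964.
```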
I don't know how you think both results do not agree, as you don't give more details. I calculated the degree-7 Taylor polynomial of $f(t)=\frac{2e^{2t}}{1+e^{2t}}$, and it is
$$
p(t)=1+t-\frac{t^3}{3}+\frac{2t^5}{15}-\frac{17t^7}{315}.
$$
So the $t^7$ term does not agree with the Picard polynomial, but no one said it has to. In any case, the plots look very similar to me.
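For verification (a small sympy check of my own, not part of the original answer):

```python
# Check that the degree-7 Taylor polynomial of 2*exp(2t)/(1+exp(2t)) matches the third
# Picard iterate through degree 5 and differs only in the t^7 coefficient.
import sympy as sp

t = sp.symbols("t")
f = 2*sp.exp(2*t) / (1 + sp.exp(2*t))

taylor7 = sp.series(f, t, 0, 8).removeO()            # Taylor polynomial of degree <= 7
picard3 = -t**7/63 + 2*t**5/15 - t**3/3 + t + 1      # x_3 from the question

print(sp.expand(taylor7))                # the polynomial p(t) quoted above
print(sp.expand(taylor7 - picard3))      # only a t^7 term remains: -4*t**7/105
```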
I compared this plot1 with plot2 and it seemed very different. But thank you for your answer! I now see that indeed this converges to the true function $x(t)$.
– Wim Verboom, Dec 8 at 13:37