The probability of reaching the absorbing states from a particular transient state?
Can I use the data available from MarkovProcessProperties to compute the probability of reaching each of the absorbing states from a particular transient state?
In an earlier post, kglr showed a solution involving the probabilities from State 1. Can that solution be amended easily to compute the probabilities from any of the transient states?
Tags: markov-chains, markov-process
asked 2 hours ago by user120911 (edited 2 hours ago)
Do you mean something like this?

StationaryDistribution[DiscreteMarkovProcess[{1, 0, 0}, {{0, 1/2, 1/2}, {0, 1, 0}, {0, 0, 1}}]]

– Sjoerd Smit, 1 hour ago

I am looking for a solution like the one shown by kglr in the link, but which is more dynamic because it offers the possibility of specifying the particular transient state to be examined.

– user120911, 1 hour ago
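For the 3-state example in the comment, the same numbers can also be read off the limiting transition matrix, which avoids any question of how StationaryDistribution treats a reducible chain. A minimal sketch (state 1 is taken as the transient state being examined; the With wrapper is just for illustration), using the same idea the answer below applies to a 10-state chain:

With[{m = {{0, 1/2, 1/2}, {0, 1, 0}, {0, 0, 1}}},
 MarkovProcessProperties[DiscreteMarkovProcess[1, m], "LimitTransitionMatrix"][[1]]]
(* {0, 1/2, 1/2} *)

Entry j of that row is the probability of eventually being absorbed in state j when starting from state 1.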
1 Answer
proc = DiscreteMarkovProcess[1, {{0., 0.5, 0., 0., 0.5, 0., 0., 0., 0., 0.},
{0., 0., 0.5, 0., 0., 0.5, 0., 0., 0., 0.},
{0., 0., 0., 0.5, 0., 0., 0.5, 0., 0., 0.},
{0., 0., 0., 1., 0., 0., 0., 0., 0., 0.},
{0., 0., 0., 0., 0., 0.5, 0., 0.5, 0., 0.},
{0., 0., 0., 0., 0., 0., 0.5, 0., 0.5, 0.},
{0., 0., 0., 0., 0., 0., 1., 0., 0., 0.},
{0., 0., 0., 0., 0., 0., 0., 0., 0.5, 0.5},
{0., 0., 0., 0., 0., 0., 0., 0., 1., 0.},
{0., 0., 0., 0., 0., 0., 0., 0., 0., 1.}}];
Graph[proc]
(* transient classes, absorbing classes (here the states 4, 7, 9 and 10),
   and the limiting transition matrix of the chain *)
{tr, ab, ltm} = MarkovProcessProperties[proc, #] & /@
   {"TransientClasses", "AbsorbingClasses", "LimitTransitionMatrix"};

(* rows: transient states, columns: absorbing states *)
TeXForm @ TableForm[ltm[[Flatten@tr, Flatten@ab]],
  TableHeadings -> {Flatten@tr, Flatten@ab}]
$\begin{array}{ccccc}
 & 4 & 7 & 9 & 10 \\
 3 & 0.5 & 0.5 & 0. & 0. \\
 6 & 0. & 0.5 & 0.5 & 0. \\
 2 & 0.25 & 0.5 & 0.25 & 0. \\
 8 & 0. & 0. & 0.5 & 0.5 \\
 5 & 0. & 0.25 & 0.5 & 0.25 \\
 1 & 0.125 & 0.375 & 0.375 & 0.125 \\
\end{array}$
answered 1 hour ago by kglr (edited 48 mins ago)
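Row $i$ of the limiting transition matrix, restricted to the absorbing columns, is the absorption matrix $B=(I-Q)^{-1}R$ of the chain: its entries are the probabilities of ending in each absorbing state when the chain starts in transient state $i$, which is exactly what the table above lists. A small wrapper around the ab and ltm values computed in the answer could make the transient state an explicit argument; this is only a sketch, and the helper name absorptionProbs is illustrative:

(* absorption probabilities from a chosen transient state s,
   reusing ab and ltm as defined in the answer *)
absorptionProbs[s_Integer] :=
 AssociationThread[Flatten[ab] -> ltm[[s, Flatten[ab]]]]

absorptionProbs[5]
(* <|4 -> 0., 7 -> 0.25, 9 -> 0.5, 10 -> 0.25|> *)

which matches the row for state 5 in the table above.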