Stream PiCamera Image Array from one Raspberry Pi to another

I am building a home surveillance system using a Raspberry Pi and OpenCV.
My setup will consist of two devices. The first will be the security camera: a Raspberry Pi Zero with a Pi camera. The other device will be a main hub (a Raspberry Pi 3) which will do all of the heavy lifting, such as facial recognition, speech recognition and other operations.

What I want to do is stream the footage from the security camera to the main hub so that it can process the images. So essentially I want to capture a frame from the Pi camera, convert it to a NumPy array (if that isn't done by default) and send that data to the main hub, where it is converted back into an image frame to be analysed by OpenCV.

I am separating the operations this way because my security camera runs on a Raspberry Pi Zero, which is not very fast and can't handle heavy lifting. The camera is also hooked up to a battery, and I am trying to lower the Pi's power usage, which is why I am dedicating a main hub to the heavy operations.

I am using a Python 3 environment on both devices. I am well aware of IoT communication technologies such as MQTT, TCP and so on, but I would like help with actually implementing such technologies in a Python script to accomplish my needs.

python opencv raspberry-pi raspberry-pi3

asked Nov 17 at 10:28, edited Nov 18 at 7:24 – Noor Sabbagh
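As a starting point for the "send the array" part, here is a minimal sketch of one way to ship raw frames over plain TCP (my illustration, not code from the thread; the hub's address and port are assumed values):

import socket
import struct

import numpy as np

HUB_ADDR = ("192.168.1.10", 5000)  # assumption: IP/port of the Pi 3 hub

def send_frame(sock, frame):
    # Send one HxWx3 uint8 frame, prefixed with its dimensions
    h, w, c = frame.shape
    header = struct.pack(">III", h, w, c)   # 12-byte big-endian size header
    sock.sendall(header + frame.tobytes())  # raw pixel bytes follow

def recv_exact(conn, n):
    # Loop until exactly n bytes have arrived
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(conn):
    # Rebuild the NumPy array on the hub side
    h, w, c = struct.unpack(">III", recv_exact(conn, 12))
    data = recv_exact(conn, h * w * c)
    return np.frombuffer(data, dtype=np.uint8).reshape(h, w, c)

The camera side would call send_frame on a connected socket whenever motion is detected; the hub side loops on recv_frame and hands each array to OpenCV.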

  • Well, you need to think about the dimensions of the images (height and width in pixels), whether colour or greyscale, and how often you need to send them. Then try and convert that to a data rate in bytes/s and work out what bandwidth you can achieve across your wired/wifi network. Then think about whether you need to compress them first, or work in YUV or MJPEG. Then think about packet loss/restart mechanisms and buffering.
    – Mark Setchell
    Nov 17 at 14:36
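(A quick illustrative calculation, mine rather than the commenter's: one uncompressed 1920×1080 colour frame at 3 bytes per pixel is 1920 × 1080 × 3 ≈ 6.2 MB, so even a single frame per second is roughly 50 Mbit/s of raw data. That is why compressing on the sending side, for example to JPEG, is usually worth the CPU cost.)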

  • Well, for now those details aren't so important, since they are easily configured; I am just after the technique that will let me send the captured image's NumPy array data to the main Pi. But to answer your points: the dimensions are 1080×1920, in colour, and a frame will be sent every time motion is detected. I also already tried byte streaming over MQTT, but my code didn't end up working (a sketch of that approach follows this comment).
    – Noor Sabbagh
    Nov 17 at 23:31
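Since MQTT keeps coming up, here is a minimal sketch of that approach (my illustration, not the asker's failed attempt; the broker address, topic name and fixed frame shape are all assumptions), using the paho-mqtt client:

import numpy as np
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"   # assumption: the Pi 3 hub also runs the MQTT broker
TOPIC = "camera/frame"    # hypothetical topic name
SHAPE = (1080, 1920, 3)   # frame shape agreed on by both sides

def publish_frame(client, frame):
    # Pi Zero side: paho accepts bytes payloads, so flatten the array
    client.publish(TOPIC, frame.tobytes(), qos=1)

def on_message(client, userdata, msg):
    # Pi 3 side: rebuild the array from the raw payload
    frame = np.frombuffer(msg.payload, dtype=np.uint8).reshape(SHAPE)
    # ... hand `frame` to OpenCV here ...

if __name__ == "__main__":
    # Run the subscriber loop on the hub; the camera side would instead
    # connect and call publish_frame whenever motion is detected.
    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER)
    client.subscribe(TOPIC)
    client.loop_forever()

Note that a raw 1080p frame is about 6 MB per message, so the broker's maximum message size may need raising; JPEG-encoding before publishing keeps messages far smaller.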

1 Answer

I think it will be better to break your task down:
1. Capture the image stream on the Pi Zero and stream it out.
2. Take that stream on the Pi 3 and process it there.



Sample code to get you started with image capture, which you can find here:



import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()


You need to work this step out yourself: stream the video to the URL IP.Add.ress.OF_pi0/cam_read



Live Video Streaming Python Flask
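As an illustration of what that linked Flask approach looks like (a minimal sketch under assumed names, not the tutorial's exact code), an MJPEG endpoint can be built from a generator that yields JPEG-encoded frames:

import cv2
from flask import Flask, Response

app = Flask(__name__)
cap = cv2.VideoCapture(0)

def mjpeg_generator():
    # Yield frames as a multipart/x-mixed-replace (MJPEG) stream
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")

@app.route("/cam_read")
def cam_read():
    return Response(mjpeg_generator(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    # Port 80 assumed so the URL needs no port number; this may require root
    app.run(host="0.0.0.0", port=80)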



Then use this URL to process the video on the Pi 3.
Sample code from here:



import numpy as np
import cv2

# Open the network stream served by the Pi Zero
vcap = cv2.VideoCapture('IP.Add.ress.OF_pi0/cam_read')
# if not vcap.isOpened():
#     print("File Cannot be Opened")

while True:
    # Capture frame-by-frame
    ret, frame = vcap.read()
    # print(vcap.isOpened(), ret)
    if frame is not None:
        # Display the resulting frame
        cv2.imshow('frame', frame)
        # Use other methods for object, face or motion detection here,
        # e.g. OpenCV Haar cascade face detection
        # Press q to close the video window before the stream ends
        if cv2.waitKey(22) & 0xFF == ord('q'):
            break
    else:
        print("Frame is None")
        break

# When everything is done, release the capture
vcap.release()
cv2.destroyAllWindows()
print("Video stop")


This answer isn't a direct solution to your question; rather, it's a skeleton to get you started. Face detection can be found here.

answered Nov 17 at 16:29, edited Nov 17 at 16:41 – amran hossen
  • Thank you for your reply, although it still doesn't really help, because I am pushing for the most efficient and lightweight solution. If I use OpenCV on the Pi Zero then I will not be able to lower its battery consumption, and it will slow the device down. Instead I was thinking of just using the picamera module: capture the frame in HD and colour, convert the image's NumPy array to bytes, and stream that to the Pi 3. My problem is the streaming part, not how to do OpenCV or facial recognition. I have tried MQTT, but its publish method only streams byte arrays (a picamera-based sketch of that idea follows this comment).
    – Noor Sabbagh
    Nov 17 at 23:40
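Following the idea in that comment (a sketch I'm adding for illustration, reusing the hypothetical send_frame helper and hub address from the TCP sketch above), the picamera module can capture straight into a NumPy array without running OpenCV on the Pi Zero:

import socket

import picamera
import picamera.array

HUB_ADDR = ("192.168.1.10", 5000)  # assumed hub address, as above

with picamera.PiCamera(resolution=(1920, 1080)) as camera:
    sock = socket.create_connection(HUB_ADDR)
    with picamera.array.PiRGBArray(camera) as output:
        # capture_continuous fills output.array with an HxWx3 uint8 array
        for _ in camera.capture_continuous(output, format="rgb"):
            send_frame(sock, output.array)  # helper from the TCP sketch
            output.truncate(0)              # reset the buffer for the next frame

In practice you would gate the send on the motion-detection result rather than stream every frame, which also helps with the battery budget.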