Build a motion activated security camera, with WebRTC, canvas and Device Orientation

As a web developer, you’ve probably seen emerging HTML5 technologies and APIs like DeviceOrientation and WebRTC (Web Real Time Communications) and thought “wow, they look cool, but they’re only for hardcore gaming, video conferencing, and other such stuff, not for my everyday development”. I’m firmly convinced that taking advantage of these capabilities is going to open up fantastic potential for developers, both for existing web sites and for entirely new web experiences. In this article, I want to talk about the latter.

When we first moved into the Web Directions office, we had an old iMac (I mean old) set up as a motion activated security camera. One of the guys who used to share the office with us had built a very simple app that, when it detected movement (I’m assuming by analysing images), sent a photo to a specified email address. Sadly, the Mac and app went when the guy moved out. I say sadly, because a few months back we could really have done with this to help catch whoever came by one night at 3am, smashed in our door, and took several devices.

But then it occurred to me that this is something we could build in the browser. All we’d need to do was:

  1. Detect motion with the DeviceMotion API (though it’s a bit more complex than this in practice, as we’ll see in a moment)
  2. Capture an image using WebRTC and the HTML5 canvas
  3. Send the image to ourselves via email (we won’t cover that today, as it’s really more a server side issue, but there are all kinds of ways you could do it)

So, let’s get started. We’ll begin by detecting motion.

Detecting motion

You’re probably thinking “there’s an HTML5 API for this, DeviceMotion”, which is exactly what I thought. The problem is, while DeviceMotion is well supported in mobile and tablet browsers (these devices almost universally have gyroscopes for detecting their orientation in 3D space, and accelerometers for measuring their acceleration in 3D as well), it’s not supported in any desktop browser. But there is a related API, DeviceOrientation, which reports the angle of the device in three dimensions, and which is supported in Chrome when the laptop it’s running on has the sensors to provide this data (to my knowledge the MacBook Pro supports DeviceOrientation, but the MacBook Air does not). DeviceMotion and DeviceOrientation work similarly: both are events sent to the window object when something changes about the device. We can provide event listeners for these events, then respond to the data they provide.

Let’s create event handlers for each of these kinds of event

if (window.DeviceMotionEvent) {
  window.addEventListener('devicemotion', motionHandler, false);
}
else if (window.DeviceOrientationEvent) {
  window.addEventListener('deviceorientation', orientationHandler, false);
}
For each type of event, we make sure that the window object supports the event type, and if it does we add an event listener to the window for the type of event.

OK, so now our window can receive these events. Let’s look at what information we get from each event, and how we can detect whether the device is in motion.

As mentioned, the most logical way to do so is via DeviceMotion, but here’s the complication. An ideal device for using as a security camera is an old laptop. It’s powered, so the battery won’t go flat, and on tablets, only Chrome for Android supports getUserMedia, for operating the device’s video camera. But, as we saw, we can use DeviceOrientation to detect motion on some laptops in Chrome. Let’s do that first, then quickly look at how we can do the same thing for devices which support DeviceMotion events.

Here’s our handler for DeviceOrientation events.

function orientationHandler(orientationData) {
  var today = new Date();

  if ((today.getTime() - lastMotionEvent) > motionInterval) {
    lastMotionEvent = today.getTime();
    checkMotionUsingOrientation(orientationData);
  }
}

and similarly, our handler for DeviceMotion events

function motionHandler(motionData) {
  var today = new Date();

  if ((today.getTime() - lastMotionEvent) > motionInterval) {
    lastMotionEvent = today.getTime();
    checkMotionUsingMotion(motionData);
  }
}

Because DeviceMotion and DeviceOrientation events fire many times a second, if we were to respond to every single such event, we’d have a very warm laptop and, on battery powered devices, much shorter battery life. So here we check the current time, and respond to the event only if the time since we last responded is greater than some interval. Checking for movement a few times every second should be more than adequate.
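As an aside, this throttling pattern can be factored into a small reusable helper. This is just a sketch (the `throttle` name and the 250ms interval below are my own choices, not part of the original code):

```javascript
// Wrap a handler so it runs at most once every `interval` milliseconds;
// events arriving inside the interval are simply ignored.
function throttle(handler, interval) {
  var lastRun = 0;
  return function (event) {
    var now = Date.now();
    if (now - lastRun > interval) {
      lastRun = now;
      handler(event);
    }
  };
}

// Usage sketch: check for movement at most four times a second
// window.addEventListener('deviceorientation', throttle(orientationHandler, 250), false);
```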

The event listeners receive deviceOrientation events, with data about the event, including information about the device’s orientation around 3 axes—alpha, beta and gamma.

  • alpha is the device’s rotation around the z axis, an imaginary line extending out vertically from the middle of the device when it is lying flat on its back. In theory, alpha=0 is facing east, 90 is facing south, 180 is facing west, and 270 is facing north, but in practice alpha is really only accurate for relative motions, not absolute directions, and so, for example, can’t be used to create a compass.
  • beta measures the rotation around the x axis, a line running horizontally through the device from left to right. 0 is when the device is flat, positive values are the number of degrees the device is tilted forward, and negative values the number of degrees it’s tilted backwards.
  • gamma measures the device’s rotation around the y axis, a line running horizontally along the plane of the device’s keyboard (or screen). Positive values are the number of degrees it’s tilted to the right, and negative values the number of degrees it’s tilted to the left.
Device Orientation axes (laptop image © umurgdk)
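To get a feel for these values, you can format and log them as they arrive. A minimal sketch (the `formatAngles` helper is my own, purely for illustration):

```javascript
// Turn a deviceorientation event's three angles (in degrees) into a readable string.
function formatAngles(event) {
  return 'alpha: ' + event.alpha.toFixed(1) +
         ', beta: ' + event.beta.toFixed(1) +
         ', gamma: ' + event.gamma.toFixed(1);
}

// Usage sketch, in the browser:
// window.addEventListener('deviceorientation', function (e) {
//   console.log(formatAngles(e));
// }, false);
```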

Responding to the event

So, here’s how we’ll respond to the event, and determine whether the device has moved.

function checkMotionUsingOrientation(orientationData){
  //detect motion using change in orientation
  var threshold = 0.7; //sensitivity, the lower the more sensitive
  var betaChange = orientationData.beta - lastBeta; //change in beta since last orientation event
  var gammaChange = orientationData.gamma - lastGamma; //change in gamma since last orientation event
  //if the change is greater than the threshold in either beta or gamma, we've moved
  var inMotion = (Math.abs(betaChange) >= threshold) || (Math.abs(gammaChange) >= threshold);

  if (inMotion) {
    //do something because it is in motion
  }

  //now we remember the most recent beta and gamma readings for comparing the next time
  lastBeta = orientationData.beta;
  lastGamma = orientationData.gamma;
}

The orientationData argument is our deviceOrientation event. Along with the sorts of information we’d expect from any event, it has 3 properties, alpha, beta and gamma, with no prizes for guessing what these contain.

What our function does is get the beta and gamma values from the event, and subtract from them the values we stored the last time we measured. If either differs by more than some threshold we’ve set (in this case a little under 1 degree) then we’ve detected a movement. We finish by storing the most recent beta and gamma values. We’ve not bothered with alpha values, because Chrome, at present the only desktop browser to support DeviceOrientation, doesn’t report alpha values, and because moving a device around only one axis is extremely difficult: if there’s movement around beta or gamma, that’s good enough for our purposes. Essentially, when the device is lying flat on its back, anyone walking in the vicinity will trigger this event.
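The core comparison can be isolated as a pure function, which makes the threshold logic easy to reason about and test. A sketch (the `hasMoved` name is mine, not from the original code):

```javascript
// True if either tilt angle has changed by at least `threshold` degrees
// since the last reading we stored.
function hasMoved(beta, gamma, lastBeta, lastGamma, threshold) {
  return Math.abs(beta - lastBeta) >= threshold ||
         Math.abs(gamma - lastGamma) >= threshold;
}
```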

How about doing the same thing when DeviceMotion events are supported? This time, instead of the device’s orientation in space, we get information about its acceleration along each of three axes: x, y and z.

  • motionData.acceleration.x is the acceleration of the device, in metres per second per second (m/s²), to the right, relative to the device (negative values are acceleration to the left)
  • motionData.acceleration.y is the acceleration of the device, in metres per second per second (m/s²), forward, relative to the device (negative values are acceleration “backwards”)
  • motionData.acceleration.z is the acceleration of the device, in metres per second per second (m/s²), upwards, relative to the device (negative values are downwards)

Here’s how we’d use this to detect motion.

function checkMotionUsingMotion(motionData){
  //algorithm courtesy

  var threshold = 0.2;
  var inMotion = false;
  var acX = motionData.acceleration.x;
  var acY = motionData.acceleration.y;
  var acZ = motionData.acceleration.z;

  if (Math.abs(acX) > threshold) {
    inMotion = true;
  }
  if (Math.abs(acY) > threshold) {
    inMotion = true;
  }
  if (Math.abs(acZ) > threshold) {
    inMotion = true;
  }

  if (inMotion) {
    //do something because it is in motion
  }
}

Here we take the acceleration in each axis, and if any of these is greater than a threshold amount (to ensure we don’t get false positives) then we’re in motion. You can see it’s a little simpler than using DeviceOrientation, as we don’t need to calculate any change.

Taking the photo

So now we can detect when the device is moving, we want our security camera to take a photo. How are we going to do this? Well, one feature of WebRTC is the ability to capture video with a device’s video camera. At present, this is supported in Firefox and Chrome on the desktop, and the Blackberry 10 Browser (which also supports devicemotion events, so your Blackberry 10 phone or Playbook can serve as a security camera if you need it!), as well as Chrome for Android (though you need to enable it with chrome://flags). WebRTC is a very powerful API, but we’re only going to need a small part of it.
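At the time of writing, browsers ship getUserMedia behind vendor prefixes, so it’s worth picking up whichever version the browser provides before using it. A sketch (the helper name is mine):

```javascript
// Return whichever flavour of getUserMedia this browser provides, or null
// if none is available.
function pickGetUserMedia(nav) {
  return nav.getUserMedia || nav.webkitGetUserMedia || nav.mozGetUserMedia || null;
}

// Usage sketch, in the browser:
// navigator.getUserMedia = pickGetUserMedia(navigator);
// if (!navigator.getUserMedia) { /* explain to the user, or fall back */ }
```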

We’ll use the getUserMedia method of the navigator object. This takes an options object, as well as a success and a failure callback function as its arguments.

var options = {video: true};
navigator.getUserMedia(options, gotVideoStream, getStreamFailed);

Our options variable is a simple object, here we just set its property video to true (if we wanted audio we’d also set an audio property to true).

We’ve also passed it two callback functions, gotVideoStream, which will be called once a video stream is available, and getStreamFailed, which is called if we don’t get a video stream (for example, if the user refuses the browser’s request to use the video camera). getUserMedia uses callbacks, rather than returning a value, because it takes time for the user to choose whether to allow video to be enabled, and as JavaScript is single threaded, this would block our UI while the user waited.
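The failure callback can be very simple. A sketch (the message wording here is my own):

```javascript
// Called if we can't get a video stream, for example because the user
// denied the browser's request to use the camera.
function getStreamFailed(error) {
  var message = 'Could not get a video stream: ' + (error.name || error);
  console.log(message);
  return message;
}
```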

Next, let’s use the video stream.

function gotVideoStream(stream) {
  var videoElement = document.querySelector("video");
  videoElement.src = window.URL.createObjectURL(stream);
}

OK, there’s a bit going on here, so let’s take it one step at a time. The browser calls our callback function, passing it a stream argument. This is a MediaStream object. We then use the createObjectURL method of the window’s URL object to get a URL for the stream. This way we can make this URL the value of the src attribute of a video element, and that video element will show the output of our camera in real time!

So, we’ve now got a working video camera that shows the video feed from our device’s camera in a web page. No servers, no plugins! But we still don’t quite have our security camera. What we need to do is take a snapshot from the video stream when we detect movement. So, let’s first take the snapshot.

Taking a snapshot from the video element

Here we’ll take a snapshot of the video element at a given time. Note this works regardless of what’s playing in the video element (so you can do a screen grab of anything playing in an HTML5 video element like this). Ready?

function takeSnapshot(){
  var canvas = document.querySelector("canvas");
  var context = canvas.getContext('2d');
  var video = document.querySelector("video");
  context.drawImage(video, 0, 0);
}

Here’s what we’re doing

  • we get a canvas element from the page
  • we get its 2D drawing context
  • we get the video element from the page
  • we use the drawImage method of the canvas to draw the video into the canvas starting at (0, 0) (the top left of the canvas).

Yes, it really is that easy. Just as you can use the context’s drawImage with an img element, we can use it with a video element.

Now we’ve got all the pieces, let’s put them together to create our security camera.

Remember this part of our motion detection functions?

if (inMotion) {
  //do something because it is in motion
}

This is where we call takeSnapshot, and then the current frame in the video element will be captured to a canvas element. You could also save this in localStorage, or send it via email to someone, or otherwise do something with the image. I’ll leave those parts to you.
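For instance, to keep the most recent snapshot around, you could turn the canvas into a data URL and stash it. A sketch (the `saveSnapshot` helper and the `lastSnapshot` key are mine; the canvas and storage are passed in as arguments so the logic stays easy to test):

```javascript
// Turn the canvas contents into a PNG data URL and store it under a fixed key.
function saveSnapshot(canvas, storage) {
  var dataURL = canvas.toDataURL('image/png'); // base64-encoded PNG image
  storage.setItem('lastSnapshot', dataURL);
  return dataURL;
}

// Usage sketch, in the browser:
// saveSnapshot(document.querySelector("canvas"), window.localStorage);
```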

And that’s really all there is to it.

I’ve also got a fully working version available on github. It’s a little more complicated to read through than the code here, but it’s copiously commented, and the basic working code is the same. Or you can see it in action here (just make sure you use Chrome with a device that supports orientation events, and has a webcam).

Notes for those following along

Note though, to make it work from your local drive, you’ll need to run it through a webserver (Chrome won’t enable the camera from file:// although Firefox will). You’ll also need a device that supports either device orientation or device motion events, which to my knowledge currently means only a MacBook Pro (not MacBook Air).

Links for further reading

Some more reading on the various features we used to build our security camera.
