Camera / Video



Secure context: This feature is available only in secure contexts (HTTPS) in supported browsers. The MediaDevices.getUserMedia() method asks the user for permission to use a media input and produces a MediaStream containing the requested media types. The stream may contain, for example, a video track (produced by a hardware or virtual video source such as a camera) or an audio track (produced by a microphone or similar audio source). It returns a Promise that resolves with the MediaStream.
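As a sketch of that promise-based flow, a small wrapper might look like the following. The helper name and the injected mediaDevices parameter are ours (passed in so the function can also be exercised outside a browser); in real page code you would pass navigator.mediaDevices.

```javascript
// Minimal sketch: request camera and/or microphone access.
// `mediaDevices` is injected so the helper is testable outside a browser.
async function openStream(mediaDevices, constraints = { audio: true, video: true }) {
  if (!mediaDevices || typeof mediaDevices.getUserMedia !== "function") {
    throw new Error("getUserMedia is not supported in this environment");
  }
  // Resolves with a MediaStream, or rejects (e.g. NotAllowedError) on refusal.
  return mediaDevices.getUserMedia(constraints);
}

// In a browser:
// const stream = await openStream(navigator.mediaDevices);
// document.querySelector("video").srcObject = stream;
```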


The camera interface provides an easy way to capture video, audio, and photographs. It exposes the devices that can be used to take pictures, including built-in cameras and external webcams. The audio recording interface returns the microphone input as a stream of audio samples. You can use these APIs, for example, in a website that lets users record their voice by capturing sound from the microphone. They are designed for applications that need live capture rather than media playback (for example, games).

Since getUserMedia() is typically implemented using platform-specific code, it may not be available in every browser or on every device; see Implementation Status. You can check which constraints the browser understands (including width and height) with navigator.mediaDevices.getSupportedConstraints(). To query the actual dimensions of the source (the preview), read the settings of the resulting video track with MediaStreamTrack.getSettings().

The user can select which type of input or output an application receives using the constraints parameter (see Constraints). The choices include front-facing cameras (facingMode: "user"), back-facing cameras (facingMode: "environment"), connected microphones (for recording audio), and external capture devices. You can use these APIs, for example, in a game that uses video to communicate with players via their webcams. Note: this feature does not yet provide access to every type of hardware this way; see Interfaces supported by your browser below.
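The facingMode choices mentioned above can be expressed as constraint objects. The helper below is illustrative (the function name is ours, not part of the API); the "user" and "environment" values are the standard facingMode keywords.

```javascript
// Build a video-only constraint object for the camera positions described above.
// "user" = front-facing camera, "environment" = back-facing camera.
function videoConstraints(facing) {
  const allowed = ["user", "environment"];
  if (!allowed.includes(facing)) {
    throw new RangeError(`facing must be one of: ${allowed.join(", ")}`);
  }
  return { audio: false, video: { facingMode: facing } };
}

// Usage in a browser:
// navigator.mediaDevices.getUserMedia(videoConstraints("user"))
```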


Privacy and security

These functions are exposed to untrusted web applications, so they are subject to the usual security restrictions imposed by the same-origin policy, and capture begins only after the user grants permission. Even so, it is critical that you not use these features for anything sensitive.

How we tested

This collection of APIs is complex, and it's been challenging to test all cases at the same time. This article summarizes our approach for testing each function listed in the table below. We have included a summary of any security considerations that we've run across in our testing. Note: It remains possible that other security-sensitive behaviors not discussed may arise when using these functions. We plan to include tests for them as we encounter them or discover their behavior through usage.

Security Considerations

See Security considerations. In WebRTC, getUserMedia() is typically used together with RTCPeerConnection: constraints are applied to the local capture, and the resulting stream's tracks can then be added to a peer connection for transmission. When a stream is used only locally, it has no external references and can only be consumed on the page that captured it. When its tracks are added to an RTCPeerConnection, a single connection can carry multiple media tracks; constraints do not create additional peer-to-peer connections.


Specify constraints that select which hardware devices can produce or consume media data (see Constraints). To do so, provide your constraints in an object with one property per media type, for example:

```javascript
const options = { audio: true, video: { facingMode: 'user' } };
const promise = navigator.mediaDevices.getUserMedia(options);
```

See Device selection using MediaStreamConstraints.
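Device selection can be sketched as below. navigator.mediaDevices.enumerateDevices() resolves with a list of entries carrying kind, label, and deviceId; the picking logic here is an illustrative helper of ours, not part of the API.

```javascript
// Pick the deviceId of the first device of the requested kind
// ("videoinput", "audioinput", or "audiooutput"), optionally matching a label.
function pickDeviceId(devices, kind, labelContains = "") {
  const match = devices.find(
    (d) => d.kind === kind && d.label.includes(labelContains)
  );
  return match ? match.deviceId : null;
}

// In a browser:
// const devices = await navigator.mediaDevices.enumerateDevices();
// const id = pickDeviceId(devices, "videoinput");
// if (id) await navigator.mediaDevices.getUserMedia({ video: { deviceId: { exact: id } } });
```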

First cut of getUserMedia() API

The getUserMedia() method lives on navigator.mediaDevices, takes an options object, and returns a Promise that resolves with a single value: an instance of MediaStream. The promise never resolves with null; if no media is available, or the user denies permission, it rejects with an error instead. This first version does not provide access to devices other than camera and microphone inputs; in later versions we expect more device types to become exposed by this API. Note: the restrictions are implemented in several different ways across browsers and platforms, so don't assume they are consistent across all current implementations yet! See Interfaces supported by your browser below.
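When the promise rejects, the error's name property tells you why. A sketch of handling the standard DOMException names follows; the error names are the specified ones, while the message wording and helper name are ours.

```javascript
// Map common getUserMedia rejection names to user-facing messages.
// The names are standard DOMException names; the messages are illustrative.
function describeGetUserMediaError(err) {
  switch (err.name) {
    case "NotAllowedError":
      return "Permission to use the camera/microphone was denied.";
    case "NotFoundError":
      return "No camera or microphone matching the request was found.";
    case "NotReadableError":
      return "The device is already in use or a hardware error occurred.";
    case "OverconstrainedError":
      return "No device satisfies the requested constraints.";
    default:
      return `Unexpected error: ${err.name}`;
  }
}

// navigator.mediaDevices.getUserMedia({ video: true })
//   .catch((err) => console.warn(describeGetUserMediaError(err)));
```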

Interface support

The getUserMedia() interface is supported in all browsers that support the MediaDevices device-selection API (see Device Selection) and the RTCPeerConnection API (for voice chat). Video support is limited to recent mobile, desktop, and Xbox platforms at this time; see Interfaces supported by your browser.
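Support can also be feature-detected at runtime. In the sketch below, the nav parameter is ours so the check can be exercised outside a browser; in page code you would pass the global navigator.

```javascript
// Feature-detect getUserMedia support on a navigator-like object.
function supportsGetUserMedia(nav) {
  return Boolean(
    nav &&
    nav.mediaDevices &&
    typeof nav.mediaDevices.getUserMedia === "function"
  );
}

// In a browser: if (supportsGetUserMedia(navigator)) { /* safe to call */ }
```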

Supported media types

We currently only test the "video" media type. We provide examples of testing constraints for each major browser implementation (see Constraints). See Bug 299577: Side effects of getUserMedia() without constraints.

Interfaces supported by your browser

WebRTC

WebRTC getUserMedia() is supported in Chrome, Firefox, and Opera 15+. See WebRTC getUserMedia().

MediaDevices

The MediaStream constructor is supported in all browsers that support the navigator.getUserMedia() interface. See MediaStream.

RTCPeerConnection

RTCPeerConnection with data channels is supported in Chrome. Since it's not currently implemented for voice chat use cases, we have focused on the video case here. See Peer-to-peer connection using RTCPeerConnection without signaling server.

Security considerations

We are not aware of any security issues specific to this collection of functions other than those common to client-side JavaScript generally. Note: As we add more advanced constraints, the security issues associated with them should be considered. For example, if we were to expose hardware device identifiers (e.g., serial numbers), we should consider what security issues would arise from exposing such information to web content.

Get the Camera/Video

The getUserMedia() function on MediaDevices returns a promise that resolves with an instance of MediaStream, which represents the stream from the camera(s) and/or microphone(s).


Two handlers are involved with this API: a success handler and an error handler (with the legacy navigator.getUserMedia(), these were the onSuccess and onError callbacks). If anything goes wrong, you will receive an error: for example, a TypeError for invalid parameters, NotFoundError when no camera is available, or NotAllowedError when the user has not given consent to use their webcam. The success handler is never called with null; failures always go through the error path. The MediaStream delivered on success contains all the information you need about the selected devices and their capabilities, and everything you need to render your stream(s) on the page (see WebRTC).

Success callback

When getUserMedia() succeeds, your success handler receives an instance of MediaStream. This is an ordinary MediaStream whose tracks' capabilities and settings match what was selected for the user's environment. Each track also carries a device identifier (the deviceId in MediaStreamTrack.getSettings()), which uniquely identifies the capture device within this browser session for this origin. That identifier might change when another getUserMedia() call switches between capture devices, or when a new browser session is started. This is important to note if you're using the webcam as a push-to-talk device, because the selected device might change between calls without your knowledge! Note: while a single getUserMedia() call will not return duplicate devices, repeated permission requests can cause the same device to be returned multiple times in the same session.
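Reading the per-track device identifier can be sketched as below; MediaStreamTrack.getSettings() is the standard API, while the helper name is ours (the stream object is mocked in tests and would come from getUserMedia() in a browser).

```javascript
// Collect the deviceId reported by each track's settings on a stream.
function trackDeviceIds(stream) {
  return stream.getTracks().map((t) => t.getSettings().deviceId);
}

// In a browser:
// const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
// console.log(trackDeviceIds(stream));
```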

Why you should trust us?

We are not aware of any specific issues or concerns with this collection of functions other than those common to client-side JavaScript generally. We spot-check that the returned MediaStreams have the expected width and height, but they could be resized after being passed into your handler. It's also possible that no matching device has a video stream; in that case the promise rejects (typically with NotFoundError) rather than delivering a stream with a missing track.

Warning: As noted earlier under Security considerations, there are some security issues associated with exposing hardware identifiers (such as serial numbers) to web content. One approach would be for getUserMedia() to return only devices with identifiers suitable for use by third parties rather than exposing actual hardware identifiers (such as serial numbers).

What can you do with my Camera/Microphone?

The user agent tracks which media capture devices are in use for each origin.

This means that when a webpage is rendered in a tab, the same set of capture devices is available to all iframes within the page, which keeps behavior consistent when pages are viewed in multiple windows or opened in new tabs. Note: the user agent tracks device use per origin, not across origins. If two sites requesting camera access are loaded in different origins, there is no way for a script running on one site to learn which device the other site is using, even within the same browsing session. Likewise, a video conferencing app's script has no way to know when another site in the same session has gained access to one of its devices.

Depth stream info

If you request a depth camera, you'll get several pieces of information about it: maxDepth, closestDepth, and normalizedDist. If your requested device doesn't have a depth stream, these will be null or undefined. The maximum value across all pixels in all frames is used as the distance metric, and the minimum testable difference between any two samples is treated as 1 unit in this number space. In other words, if two samples are closer together than minPixelDepth / kMaxPixelDifference, they will both return the same value.

Normalized Depth Values

NormalizedDist is a floating-point, normalized value of the distance between this pixel and the nearest pixel in any depth stream. The minPixelDepth/kMaxPixelDifference limits on this value are strictly enforced across all pixels; no two samples will ever get normalizedDist values that are too close to each other (i.e., less than minPixelDepth or greater than kMaxPixelDifference).
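A normalized depth value of this sort is usually a simple linear mapping of a raw sample into [0, 1]. The formula below is an illustrative sketch under that assumption, not the algorithm specified by the proposal; the function name and parameters are ours.

```javascript
// Linearly normalize a raw depth sample into [0, 1] given the
// device's reported minimum and maximum depths (illustrative only).
function normalizeDepth(raw, minDepth, maxDepth) {
  if (maxDepth <= minDepth) {
    throw new RangeError("maxDepth must exceed minDepth");
  }
  // Clamp out-of-range samples so the result stays in [0, 1].
  const clamped = Math.min(Math.max(raw, minDepth), maxDepth);
  return (clamped - minDepth) / (maxDepth - minDepth);
}
```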

Closest Depth

If your device does not have a depth stream, closestDepth is null or undefined. Otherwise, it's an object containing these fields:

lowValue - Minimum range known to be safe for foreground objects
highValue - Maximum range known to be safe for foreground objects
extLowValue - Lowest possible range if the attached hardware supports it
extHighValue - Highest possible range if the attached hardware supports it
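Given a closestDepth object with those fields, checking whether a measured distance falls in the safe foreground range could look like this. The helper is hypothetical, built only on the fields described above.

```javascript
// Return true if `distance` lies within the safe foreground range,
// optionally falling back to the extended range when the hardware reports one.
function inSafeRange(closestDepth, distance, useExtended = false) {
  if (!closestDepth) return false; // no depth stream available
  const low = useExtended && closestDepth.extLowValue != null
    ? closestDepth.extLowValue
    : closestDepth.lowValue;
  const high = useExtended && closestDepth.extHighValue != null
    ? closestDepth.extHighValue
    : closestDepth.highValue;
  return distance >= low && distance <= high;
}
```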

Availability of Depth Streams

While some cameras do not have depth information available at all, availability is generally advertised via the video stream's loadeddata event. If the device does not support a depth stream, it will return null from getCapabilities().

What's the use case that the working group had in mind?

This enables a new class of applications that can handle arbitrary visual media, including: Depth effects for artistic purposes: an application could let the user draw on and "interact with" static images and video. Interactive education materials: an application could allow users to interactively explore 3D models from various angles.

Spatial augmented reality apps: These run as desktop browsers now but would benefit greatly from full hardware tracking + acceleration. Especially those using multiple screens/depth cameras to build hybrid AR/VR experiences. Imagine a web-based CAD viewer where you can walk around your design at actual size, annotate directly onto it, etc... Multi-user shared virtual spaces: Web-based MMO games with 3D scenes, where each player has a "window into the world" they can see and share. In these cases, it would be important that different users have no way to either interfere or learn about the presence of other users.

Media exploration apps: developers want the ability to build immersive experiences around photos and videos. Users should be able to walk around a space, look behind objects, and so on. These apps need accurate tracking in space, with wide-FOV cameras taking feeds from multiple devices at once.

What security requirements are there?

Nothing changes on what you can access or how permissions work here compared to WebRTC's use of getUserMedia(). It's still done through a navigator.mediaDevices request that asks for audio or video, and if allowed it's routed to the RTCPeerConnection.

Depth cameras work just like other media types except for one detail: depth images are not real-time. The data they provide is static, a picture/mesh representing the scene you see at some useful resolution. There is no reason to think of these as "live" things that affect what you see on screen, so the camera parameter passed to createMediaStreamSource() only includes the deviceId; there is no need to specify a particular stream name here, since whatever you get will be made available via all streams with non-null ids (see MediaDevices.ondevicechange).

Technology to implement

The proposed API is a very thin wrapper around the current getUserMedia() interface for video. It's mostly just adding depthframe as an extra objectType you can request from your media stream. In return, you get an additional "depth" attribute on the MediaStreamTrack objects that have been passed to createOffer(). This attribute is a new kind of track called DepthDataTrack that carries a promise-like object called a DepthDataFrame .

Depth Data Track

A DepthDataTrack wraps a single DepthDataFrame and exposes its properties as read-only named attributes:

maxDepth - The maximum range for this pixel
deviceId - The unique ID of the camera used to take this photo
timeStamp - A DOMHighResTimeStamp representing the time when the pixel was sampled. The API does not specify how this is generated, but you can probably assume it tracks frameTime in some way for your use case.

DepthDataFrame

A DepthDataFrame object has five read-only properties that expose its contents to user code:

left - The position of this pixel in physical space
right - The corresponding value on the right side
up - The orientation in which the camera was held at capture time (using gravity as a Z-axis)
front - How far away things are in front of the camera, based on its optical characteristics
behind - Same idea for objects behind you

Many webcams do not support all these values, so implementers should ensure they report min/max ranges via getDepth() and depthMax to avoid hiding content or triggering defects by telling developers things are closer than they are. DepthDataFrames may include additional values (and cameras may provide even more data), but right now the proposal only allows for this basic set.

Sample implementations

A system like this requires shared standards, so there's no "native" implementation that you can use without also supporting WebRTC / getUserMedia(). Mozilla is working with members of the Khronos Group (the industry standard group responsible for OpenGL) on a new API called OpenXR which should provide all these capabilities in an upcoming specification.

The currently proposed getUserMedia() extension does not include support for any depth-based media beyond what right now exists in WebRTC. However, the standard already allows for extensions to add new media types by specifying a MIME type in an offer/answer exchange.

Browser Compatibility

In practice, this will be something that Chrome implements ahead of other browsers, but Firefox has been working on adding depth-based media support to getUserMedia() since late last year, and it should come online soon. To ensure that you can take video from one browser and upload it to another without running up against bugs, while still allowing content adaptation where appropriate, we should specify what subset of common capabilities all implementations must provide. Unfortunately, there's no easy way forward here: we cannot write a spec with requirements like "100% coverage" because what "coverage" means is entirely implementation-specific. In the worst case, people will have to write code that runs in multiple engines just to do basic things like scaling video from one device to another.


© Copyright 2022 Geolance. All rights reserved.