How has ‘Tap to Focus’ evolved over the years on Android?
The state of Android Camera: Tap to Focus
Tap to Focus
There’s no denying how satisfying it is when your camera magically knows what you’re pointing it at. But often it’s not quite right. So, we need the ability to focus the camera ourselves.
On Android, this is conventionally done using a tap. The user taps anywhere on the screen and the camera adjusts the lens to match.
Although the concept is simple, implementing this in your apps is rather tricky.
Camera1
When Android launched we had a relatively simple set of APIs. Implementing Tap to Focus went something like this.
Firstly, we have to add an onTouchListener. We listen for down events and get the coordinates of the touch.
surfaceView.setOnTouchListener { _, motionEvent ->
    val actionMasked = motionEvent.actionMasked // Or just action
    if (actionMasked != MotionEvent.ACTION_DOWN) {
        return@setOnTouchListener false
    }
    val x = motionEvent.x
    val y = motionEvent.y
    // More code shown below
}
Then we need to convert those coordinates into a Rect that the camera will understand.
val touchRect = Rect(
    (x - 100).toInt(),
    (y - 100).toInt(),
    (x + 100).toInt(),
    (y + 100).toInt()
)
val targetFocusRect = Rect(
    touchRect.left * 2000 / surfaceView.width - 1000,
    touchRect.top * 2000 / surfaceView.height - 1000,
    touchRect.right * 2000 / surfaceView.width - 1000,
    touchRect.bottom * 2000 / surfaceView.height - 1000
)
Here we are taking the touch point and creating a rectangle around it. Then we map that rectangle into the camera’s coordinate space. See Android Docs for details.
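One detail worth noting: Camera1 expects focus area coordinates in the range -1000 to 1000, so a tap near the edge of the preview can produce a Rect that falls outside that range and gets rejected by setParameters. A minimal sketch of a clamping step (the helper name is ours, not part of the API):
// Clamp the mapped rect so every edge stays within Camera1's -1000..1000 space
fun clampToCameraSpace(rect: Rect): Rect = Rect(
    rect.left.coerceIn(-1000, 1000),
    rect.top.coerceIn(-1000, 1000),
    rect.right.coerceIn(-1000, 1000),
    rect.bottom.coerceIn(-1000, 1000)
)

val safeFocusRect = clampToCameraSpace(targetFocusRect)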
Next, we cancel any previous focus requests.
// Cancel any previous requests
camera.cancelAutoFocus()
Then we make a focus request on the camera.
// Set the camera parameters
val focusList = listOf(
    Camera.Area(targetFocusRect, 1000)
)
val parameters = camera.getParameters()
parameters.setFocusAreas(focusList)
parameters.setMeteringAreas(focusList)
camera.setParameters(parameters)

// Request focus
camera.autoFocus { success, camera ->
    // May want to schedule a cancelAutoFocus with some delay
    // perform any other actions as required
}
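The delayed cancel mentioned in the comment could be done with a Handler. A minimal sketch of an expanded callback, assuming we want to return to continuous focus a few seconds after the tap (the delay and the fallback mode are our own choices, not prescribed by the API):
val handler = Handler(Looper.getMainLooper())

camera.autoFocus { success, camera ->
    handler.postDelayed({
        // Release the tap-to-focus lock and fall back to continuous focus if supported
        camera.cancelAutoFocus()
        val parameters = camera.parameters
        if (Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE in parameters.supportedFocusModes) {
            parameters.focusMode = Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE
            camera.parameters = parameters
        }
    }, 3000L)
}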
Although there are quite a few lines of code, this implementation was fairly straightforward.
Camera2
Over the years, Android users have demanded better cameras with more advanced features. To support this growth, a new set of APIs was needed. Android added the Camera2 APIs in Android 5, deprecating the previous set. They expanded the range of supported features, offering lots of flexibility to device makers.
Like Camera1, we start with an onTouchListener.
surfaceView.setOnTouchListener { _, motionEvent ->
    val actionMasked = motionEvent.actionMasked // Or action
    if (actionMasked != MotionEvent.ACTION_DOWN) {
        return@setOnTouchListener false
    }
    val x = motionEvent.x
    val y = motionEvent.y
    // More code shown below
}
And again we need to convert that touch point into something the camera can understand: namely, focus regions. With Camera2, this is a lot more complex.
// Get characteristics
val characteristics = cameraManager.getCameraCharacteristics(cameraDevice.id)
val activeArraySize = characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE) ?: return
val cameraFacing = characteristics.get(CameraCharacteristics.LENS_FACING)
val sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION) ?: return

// Calculate rotationCompensation
val deviceRotation = windowManager.defaultDisplay.rotation
val deviceRotationDegrees = ORIENTATIONS[deviceRotation]
val isFrontFacing = cameraFacing == CameraCharacteristics.LENS_FACING_FRONT
val rotationCompensation = if (isFrontFacing) {
    (sensorOrientation + deviceRotationDegrees) % 360
} else { // back-facing
    (sensorOrientation - deviceRotationDegrees + 360) % 360
}

// Define outputs
var outputX: Float
var outputY: Float
val outputWidth: Float
val outputHeight: Float

// Set initial outputs
if (rotationCompensation == 90 || rotationCompensation == 270) {
    // We're horizontal. Swap width/height. Swap x/y.
    outputX = y
    outputY = x
    outputWidth = surfaceView.height.toFloat()
    outputHeight = surfaceView.width.toFloat()
} else {
    outputX = x
    outputY = y
    outputWidth = surfaceView.width.toFloat()
    outputHeight = surfaceView.height.toFloat()
}

// Map to the correct coordinates according to rotationCompensation
when (rotationCompensation) {
    90 -> outputY = outputHeight - outputY
    180 -> {
        outputX = outputWidth - outputX
        outputY = outputHeight - outputY
    }
    270 -> outputX = outputWidth - outputX
}

// Swap x if it's a mirrored preview
if (isFrontFacing) {
    outputX = outputWidth - outputX
}

// Normalize to [0, 1]
outputX /= outputWidth
outputY /= outputHeight

// Create MeteringRectangle
val focusAreaTouch = MeteringRectangle(
    (outputX * activeArraySize.width()).toInt(),
    (outputY * activeArraySize.height()).toInt(),
    (activeArraySize.width() * TOUCH_AREA_MULTIPLIER).toInt(),
    (activeArraySize.height() * TOUCH_AREA_MULTIPLIER).toInt(),
    MeteringRectangle.METERING_WEIGHT_MAX
)
val focusRegions = arrayOf(focusAreaTouch)
There’s a lot of code here, but in essence we are mapping our touch point into Camera2’s coordinate space. Unlike Camera1’s fixed -1000 to 1000 space, Camera2’s coordinate space depends on the device and sensor, so we need to consider:
- device rotation,
- camera sensor orientation,
- camera facing, and
- the camera sensor’s active array size.
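The snippet above also uses an ORIENTATIONS lookup that isn’t shown. A common definition, and the one assumed here, simply maps the display rotation constants to degrees:
import android.util.SparseIntArray
import android.view.Surface

// Maps Surface.ROTATION_* constants to degrees
private val ORIENTATIONS = SparseIntArray().apply {
    append(Surface.ROTATION_0, 0)
    append(Surface.ROTATION_90, 90)
    append(Surface.ROTATION_180, 180)
    append(Surface.ROTATION_270, 270)
}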
Next, we need to check that focusing is supported by the camera.
// Check AF supported
if (getCameraAvailableAfModes()?.contains(CameraMetadata.CONTROL_AF_MODE_AUTO) != true) {
    return
}
val maxRegionsAf = characteristics.get(CameraCharacteristics.CONTROL_MAX_REGIONS_AF)
if (maxRegionsAf == null || maxRegionsAf < 1) return
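getCameraAvailableAfModes() is a small helper of ours rather than a framework call; a minimal sketch that reads the supported AF modes from the camera characteristics:
// Helper: the AF modes this camera supports, or null if the key is unavailable
private fun getCameraAvailableAfModes(): List<Int>? =
    characteristics.get(CameraCharacteristics.CONTROL_AF_AVAILABLE_MODES)?.toList()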
Next, cancel any previous focus requests and any ongoing repeating requests. We do this with the stopRepeating API and by making a capture request.
// Stop repeating preview and cancel any existing focus requests
session.stopRepeating()
val cancelBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
for (surface in surfaces) {
    cancelBuilder.addTarget(surface)
}
cancelBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_CANCEL)
session.capture(cancelBuilder.build(), null, null)
Now we can make the focus capture request on the camera, using the focusRegions. We create our request builder, setting the various focus and region settings.
// Create new focus request
val focusBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
for (surface in surfaces) {
    focusBuilder.addTarget(surface)
}
focusBuilder.set(CaptureRequest.CONTROL_AF_REGIONS, focusRegions)
val maxRegionsAe = characteristics.get(CameraCharacteristics.CONTROL_MAX_REGIONS_AE)
if (maxRegionsAe != null && maxRegionsAe > 0) {
    focusBuilder.set(CaptureRequest.CONTROL_AE_REGIONS, focusRegions)
}
val maxRegionsAwb = characteristics.get(CameraCharacteristics.CONTROL_MAX_REGIONS_AWB)
if (maxRegionsAwb != null && maxRegionsAwb > 0) {
    focusBuilder.set(CaptureRequest.CONTROL_AWB_REGIONS, focusRegions)
}
focusBuilder.set(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_AUTO)
focusBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_START)
Then make the capture request.
// Make focus request
val callback = object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        // TODO Restart your repeating request using the same focusRegions
    }

    override fun onCaptureFailed(
        session: CameraCaptureSession,
        request: CaptureRequest,
        failure: CaptureFailure
    ) {
        // TODO Restart your default repeating request
    }
}
session.capture(focusBuilder.build(), callback, null)
// May want to schedule a cancel focus request with some delay
For this request, we add a callback. We need to restart our repeating request after the focus completes.
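A minimal sketch of that restart inside onCaptureCompleted, assuming previewBuilder is the builder used for the normal repeating preview request (a name introduced here for illustration):
// Resume the repeating preview, keeping the tapped focus regions active
previewBuilder.set(CaptureRequest.CONTROL_AF_REGIONS, focusRegions)
previewBuilder.set(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_AUTO)
previewBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_IDLE)
session.setRepeatingRequest(previewBuilder.build(), null, null)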
As you can see, the Camera2 APIs are more complex. They need more code and more in-depth knowledge of the camera system. And I haven’t even included the various try-catch scenarios in the code above.
CameraX
The Camera2 APIs serve the purpose of providing flexible and expandable camera support, but they are difficult to work with, requiring lots of code for simple use cases.
To combat this issue, Android introduced an unbundled library called CameraX. CameraX is a set of APIs that internally call the Camera2 APIs. It aims to make common use cases simple to build, whilst still allowing more complex interaction with Camera2.
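For context, the camera and binding.preview used in the snippets below come from the usual CameraX setup. A minimal sketch, assuming a PreviewView called preview in the layout and that this code runs in an Activity:
val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
cameraProviderFuture.addListener({
    val cameraProvider = cameraProviderFuture.get()
    val preview = Preview.Builder().build().apply {
        // Render the camera feed into our PreviewView
        setSurfaceProvider(binding.preview.surfaceProvider)
    }
    // 'camera' (a property of ours) is what we later call startFocusAndMetering on
    camera = cameraProvider.bindToLifecycle(
        this, CameraSelector.DEFAULT_BACK_CAMERA, preview
    )
}, ContextCompat.getMainExecutor(context))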
As expected, we start with an onTouchListener.
surfaceView.setOnTouchListener { _, motionEvent ->
    val actionMasked = motionEvent.actionMasked // Or action
    if (actionMasked != MotionEvent.ACTION_DOWN) {
        return@setOnTouchListener false
    }
    val x = motionEvent.x
    val y = motionEvent.y
    // More code shown below
}
And again we need to convert that touch point into something the camera can understand. This time though, CameraX is going to do this for us.
val factory = binding.preview.meteringPointFactory
val point = factory.createPoint(x, y)
val action = FocusMeteringAction.Builder(point, FocusMeteringAction.FLAG_AF)
    .addPoint(point, FocusMeteringAction.FLAG_AE)
    .addPoint(point, FocusMeteringAction.FLAG_AWB)
    .build()
Now we make the request.
val future = camera.cameraControl.startFocusAndMetering(action)
future.addListener(
    {
        try {
            val result = future.get()
            Log.d(tag, "Focus Success: ${result.isFocusSuccessful}")
        } catch (e: ExecutionException) {
            Log.e(tag, "Focus failed", e)
        } catch (e: InterruptedException) {
            Log.e(tag, "Focus interrupted", e)
        }
    },
    cameraExecutor
)
We don’t need to cancel the focus after some delay because CameraX also takes care of that.
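If you do want control over that behaviour, FocusMeteringAction’s builder lets you change or disable the auto-cancel; a brief sketch:
// Keep the tapped focus point for longer than the default before auto-cancelling
val longerAction = FocusMeteringAction.Builder(point, FocusMeteringAction.FLAG_AF)
    .setAutoCancelDuration(10, TimeUnit.SECONDS)
    .build()

// Or keep it until we cancel it ourselves
val stickyAction = FocusMeteringAction.Builder(point, FocusMeteringAction.FLAG_AF)
    .disableAutoCancel()
    .build()
// Later: camera.cameraControl.cancelFocusAndMetering()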
CameraX simplifies the implementation for the developer, making it easy to do the right thing.
Summary
With CameraX reaching stable, implementing your own camera functionality is finally becoming simpler. The Android team has recognised the camera APIs’ difficult past and brought out a great suite of tools. I recommend giving CameraX a try with Google’s code lab.
Resources
- github.com/brightec/KBarcode
- www.brightec.co.uk/blog/camerax-introduction
- medium.com/android-news/camerax-an-introduction-b3d76c3820e6
- source.android.com/devices/camera/versioning
- developer.android.com/reference/android/hardware/Camera
- jayrambhia.com/blog/android-touchfocus
- stackoverflow.com/questions/17993751/whats-the-correct-way-to-implement-tap-to-focus-for-camera
- gist.github.com/royshil/8c760c2485257c85a11cafd958548482
- medium.com/androiddevelopers/whats-new-in-camerax-fb8568d6ddc
- 07687375219170485808.googlegroups.com/attach/b86ef247362e5/Touch-to-focus%20API%20Draft.pdf
- androidx/camera/core/DisplayOrientedMeteringPointFactory.java
- androidx/camera/core/MeteringPointFactory.java
- developer.android.com/training/camerax
- developer.android.com/training/camerax/configuration#control-focus
Originally published at https://www.brightec.co.uk.