Release 0.6.0.0 – Jun 2020 (20.6)

Release 0.6.0.0 

  • Updated several libraries that Liveness depends on; most significantly, OpenCV and Boost have been updated. This is needed to address security, buildability, and maintenance issues, and for compatibility with IDES. 

Release 0.5.5.0 

  • We now adapt the heuristic jump checker thresholds based on the speed of the device Liveness is running on. 
  • Added a new callback that provides much more detail on the Liveness journey as it progresses. This can be used to build a more intuitive user interface (see the sketch after this list).  
  • Fixed a bug that caused Liveness to start up with the wrong default parameters for Liveness journeys.  
  • Added the ability to retrieve more detail about the reason for a Liveness journey failure, which can then be displayed in the onboarding interface.  
  • Consolidated the server-side and mobile release code. 
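
As an illustration of how the richer journey callback and failure details might be surfaced in a host app, here is a minimal Kotlin sketch. The interface, enum, and method names below are hypothetical and not the SDK's actual API; they only show the shape of information described above (per-action progress and a failure reason) being turned into user-facing prompts.

```kotlin
// Hypothetical names for illustration only; the real SDK callback types differ.
enum class JourneyAction { SMILE, FROWN, TURN_LEFT, TURN_RIGHT, LOOK_UP, LOOK_DOWN }

enum class FailureReason { FACE_LOST, TOO_MANY_JUMPS, TIMED_OUT, EXPRESSION_NOT_DETECTED }

// A richer progress callback lets the host app show which action is in
// progress, how close the user is to completing it, and why a journey failed.
interface LivenessJourneyListener {
    fun onActionStarted(action: JourneyAction, actionIndex: Int, totalActions: Int)
    fun onActionProgress(action: JourneyAction, framesPassed: Int, framesRequired: Int)
    fun onJourneyFailed(reason: FailureReason)
    fun onJourneyPassed()
}

// Example consumer that turns the callback into simple user-facing prompts.
class PromptingListener : LivenessJourneyListener {
    override fun onActionStarted(action: JourneyAction, actionIndex: Int, totalActions: Int) {
        println("Step ${actionIndex + 1} of $totalActions: $action")
    }

    override fun onActionProgress(action: JourneyAction, framesPassed: Int, framesRequired: Int) {
        println("Hold it... ${(100 * framesPassed) / framesRequired}% done")
    }

    override fun onJourneyFailed(reason: FailureReason) {
        println("Liveness check failed ($reason). Please try again.")
    }

    override fun onJourneyPassed() {
        println("Liveness check passed.")
    }
}
```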

Release 0.4.9.1 

  • Added anti-expressions. A smile expression will not pass if the user is also frowning, and a frown expression will not pass if the user is smiling. We also force a smile and a frown into every journey of 2 or more actions. A single image will no longer be able to pass the Liveness expression actions. 
  • Added the ability to pass frown with a sad face. The US interpretation of a frown is a downturned mouth, while elsewhere we take it to mean lowered, glowering eyebrows. Either can now be used to pass the frown action. 
  • Allow the minimum number of pass frames to be configured, so that we can balance resistance to spoofing against the ability to pass on slower devices (see the configuration sketch after this list).  
  • Align default jump angles. We detect jumps in face position so that a straight-on static image cannot be substituted with a left/right/up/down static image and still pass. The default jump angles are now slightly less than the default angle thresholds. (This jump checker can be turned off by setting the jump angles to large values.) 
  • Detect jumps when face tracking is lost but the subject's face is within the straight-on angle tolerances before and after the tracking loss. It is common to lose tracking if a subject turns too far – this extra jump checker only fires if the user is not turning but we still lose tracking. The idea is to catch the swapping of static expression images. (Both jump checkers can be turned off by setting the number of allowed jumps to -1.) 
  • Do not auto-adjust the frame rate while face tracking is lost. Re-establishing face tracking can take a long time, so in this situation we do not auto-tune pass frames and frame rate based on actual processing times.  
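
The sketch below is a hypothetical Kotlin configuration holder, not the SDK's actual configuration API, and the default values shown are assumptions. It only illustrates the relationships described above: the minimum number of pass frames is tunable, the angle-based jump checker is effectively disabled by setting very large jump angles, and both jump checkers are disabled by setting the allowed jump count to -1.

```kotlin
// Hypothetical configuration holder; the real SDK parameter names and defaults differ.
data class LivenessConfig(
    val minPassFrames: Int = 8,               // frames an action must be held to pass
    val jumpAngleYawDegrees: Double = 14.0,   // illustrative: slightly below the 15-degree side threshold
    val jumpAnglePitchDegrees: Double = 9.0,  // illustrative: slightly below the 10-degree up/down threshold
    val allowedJumps: Int = 0                 // -1 disables both jump checkers
)

// Stricter profile for fast devices: more pass frames, jump checkers on.
val strictConfig = LivenessConfig(minPassFrames = 12)

// Lenient profile for slow devices: minimum pass frames, jump checkers off.
val lenientConfig = LivenessConfig(
    minPassFrames = 8,
    jumpAngleYawDegrees = 180.0,   // so large the angle-based checker never fires
    jumpAnglePitchDegrees = 180.0,
    allowedJumps = -1              // turns off both jump checkers
)
```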

Release 0.4.8.3 

  • Added auto-tuning of frame rate and pass frames based on the frame rate actually achieved on the device, which is limited by its processing power. As a result, passing an action should take approximately 1 second on all devices, provided at least 8 frames per second is achieved (see the sketch after this list). 
  • Reduced susceptibility to static image attacks. We now require 0.75-degree delta changes in pose angle for the first three frames, i.e. the subject must move by this amount for at least 3 frames of video to pass a pose action. The subject can then hold a position above the angle thresholds until pass frames has been reached, or keep moving to pass. (Default threshold angles are 15 degrees to the side and 10 degrees up and down.) 
  • Made the minimum number of pass frames 8, even on devices that are too slow to support this frame rate, in order to prevent spoofing with flat images. 
  • Reduced the number of system clock calls to improve efficiency on Android devices. 
  • When sending multiple face match images, ensure that they are captured throughout the Liveness process. This makes it less likely that there will be too few un-blurred, straight-on images to send the requested number of images.  
  • Compiler/toolchain updates to support current Android versions. 
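
The Kotlin sketch below illustrates the two rules above with hypothetical helper functions; it is not the SDK's implementation, and it assumes one 0.75-degree delta per frame-to-frame step over the first three steps. The number of pass frames is derived from the frame rate the device actually achieves so that holding an action takes roughly one second, never dropping below 8 frames, and a pose action only begins to pass if the pose angle genuinely moves at the start.

```kotlin
import kotlin.math.abs
import kotlin.math.max
import kotlin.math.roundToInt

// Hypothetical helpers for illustration; not the SDK's actual implementation.

// Derive pass frames from the measured frame rate so an action takes roughly
// one second to hold, but never fewer than 8 frames (the anti-spoofing floor).
fun tunedPassFrames(measuredFps: Double, targetSeconds: Double = 1.0, minFrames: Int = 8): Int =
    max(minFrames, (measuredFps * targetSeconds).roundToInt())

// Require the pose angle to change by at least 0.75 degrees between each of the
// first three frame-to-frame steps, so a single flat image cannot start a pass.
fun firstFramesShowMovement(poseAngles: List<Double>, minDeltaDegrees: Double = 0.75): Boolean {
    if (poseAngles.size < 4) return false
    return (0 until 3).all { i -> abs(poseAngles[i + 1] - poseAngles[i]) >= minDeltaDegrees }
}

fun main() {
    println(tunedPassFrames(measuredFps = 24.0)) // 24 frames: ~1 second on a fast device
    println(tunedPassFrames(measuredFps = 5.0))  // clamped to the minimum of 8 frames
    println(firstFramesShowMovement(listOf(0.0, 0.9, 1.8, 2.7, 2.7))) // true: real movement
    println(firstFramesShowMovement(listOf(0.0, 0.0, 0.0, 0.0, 0.0))) // false: static image
}
```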

Release 0.4.4.0 – Last VSTS production version – moved to Jenkins for later releases.  
