A Bit about LPR on sites and more...



LPR on projects... hmm... it could be better!

During years of work on sites across Europe, we have come across a lot of different implementations of LPR technology. The sad truth is that, in most cases, LPR works worse than it could, or takes up more resources than it should.

The cameras are often not configured at all - they are left in their out-of-the-box state with only the network settings done, or set up as standard wide-angle overview cameras.

I would like to share some tips on how to configure LPR cameras to maximize their potential and minimize false readings.

Set your resolution, it won't hurt!

Starting with resolution: depending on the detection range and the size of the plates in pixels, in 90% of situations there is no need to set it to 4K or 5 MPx. Standard 1080p is plenty for correct operation in most use cases. The higher the resolution you use, the more strain you put on your resources and the fewer cameras you will be able to run.
The key to keeping the resolution down is positioning the camera correctly. The right optical zoom and the right vertical and horizontal angles will let you reduce both resolution and FPS and save resources.
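As a rough illustration (a sketch with assumed numbers: many LPR engines want roughly 130 px or more across a standard ~520 mm EU plate - check your engine's documentation for the real requirement), you can estimate how wide a scene a given resolution can cover while still keeping enough pixels on the plate:

```python
# Rough estimate: how much scene width a given resolution can cover
# while keeping enough pixels on the license plate for recognition.
# Assumed numbers - adjust them to your LPR engine's requirements.

PLATE_WIDTH_M = 0.52        # standard EU plate width in meters (assumption)
MIN_PLATE_PX = 130          # assumed minimum plate width in pixels for reliable reads

def max_scene_width_m(frame_width_px: int) -> float:
    """Scene width (in meters) that still keeps MIN_PLATE_PX on the plate."""
    px_per_meter = MIN_PLATE_PX / PLATE_WIDTH_M   # ~250 px/m
    return frame_width_px / px_per_meter

for name, width_px in [("1080p", 1920), ("5 MPx", 2560), ("4K", 3840)]:
    print(f"{name}: up to ~{max_scene_width_m(width_px):.1f} m of scene width")

# 1080p: up to ~7.7 m -> enough for one or two lanes, which is why higher
# resolutions are rarely needed for a dedicated LPR view.
```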

Sometimes less is more! How to get resources for more than one camera.

The next obvious thing is FPS, which in most out-of-the-box setups is set to the maximum possible, i.e. 25/30 fps. That is too much for 90% of use cases!
When working with slow-moving cars at speeds of 0-50 km/h, the frame rate can be set between 5 and 15 fps.
In a lot of parking solutions I see LPR running at 25 fps on sites where the car has to stop before crossing the barrier, when it could run at 5-8 fps with no decrease in performance. The back-of-envelope check below shows why.
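As a quick sanity check (a rough sketch with assumed numbers, not anything vendor-specific), you can count how many frames the plate spends inside the detection zone at a given speed and frame rate:

```python
# How many frames does a plate spend inside the detection zone?
# Assumed numbers: a ~10 m detection zone and ~30 km/h approach speed.

def frames_in_zone(zone_length_m: float, speed_kph: float, fps: float) -> float:
    speed_ms = speed_kph / 3.6
    dwell_time_s = zone_length_m / speed_ms
    return dwell_time_s * fps

for fps in (5, 8, 15, 25):
    n = frames_in_zone(zone_length_m=10, speed_kph=30, fps=fps)
    print(f"{fps:>2} fps -> ~{n:.0f} frames while the plate is in view")

#  5 fps -> ~6 frames, 8 fps -> ~10, 15 fps -> ~18, 25 fps -> ~30.
# Even 5 fps leaves several usable frames at 30 km/h, so 25 fps mostly burns
# CPU/GPU and bandwidth without improving recognition.
```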

For example, in a particular city project with cameras at crossroads, the potential speed range is 0-80 km/h and we use 15 fps at 1080p. The average speed is around 30 km/h, but even fast cars are read (higher speed means more false readings, not a complete lack of readings). The resolution and FPS settings are based on camera positioning and purpose, but if set correctly, the same server can handle far more cameras.

For example, based on our platform calculator: a classic out-of-the-box camera (5 MPx, 25 fps) uses the same server resources as four cameras set to 1080p at 15 fps. With this small example you can see the choice for your site: one out-of-the-box camera, or four properly configured ones.
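The ratio is easy to reproduce without the calculator. Here is a minimal sketch comparing raw pixel throughput per stream, which is only a rough proxy for server load (the real calculator also accounts for codec, bitrate and analytics type):

```python
# Raw pixel throughput per stream - a rough proxy for server load.
def mpix_per_second(width: int, height: int, fps: int) -> float:
    return width * height * fps / 1e6

out_of_box = mpix_per_second(2560, 1920, 25)   # ~5 MPx @ 25 fps
optimized  = mpix_per_second(1920, 1080, 15)   # 1080p @ 15 fps

print(f"out of box: {out_of_box:.0f} Mpix/s")    # ~123 Mpix/s
print(f"optimized:  {optimized:.0f} Mpix/s")     # ~31 Mpix/s
print(f"ratio: ~{out_of_box / optimized:.1f}x")  # ~4x -> roughly 4 cameras per 1
```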

If the LP is not razor sharp, you cut your recognition results.

Another important step is setting the correct picture parameters. It's a tricky topic because it is highly dependent on the scene we work with, but there are two important parts to remember: shutter speed and illumination.

In the case of LPs, the shutter speed depends on the object's speed. It controls how long light reaches the sensor in each frame, helping you get a consistently sharp license plate image.
I suggest setting it between 1/150 and 1/500 based on the speed of the cars in view. The faster they are, the closer to 1/500 we should get.
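To pick a value inside that range, it helps to estimate the motion blur during one exposure. A rough sketch with assumed numbers (plate about 0.52 m wide and roughly 130 px across in the frame, camera mounted at about 20° to the direction of travel, so only the motion component across the frame smears the characters):

```python
import math

# Estimate the motion blur (in pixels) of a plate during one exposure.
# Assumed geometry - adjust to your own scene.
PLATE_WIDTH_M = 0.52           # standard EU plate width (assumption)
PLATE_WIDTH_PX = 130           # assumed plate width in the frame, in pixels
CAMERA_ANGLE_DEG = 20          # angle between camera axis and direction of travel

def blur_px(speed_kph: float, shutter_s: float) -> float:
    px_per_m = PLATE_WIDTH_PX / PLATE_WIDTH_M                       # ~250 px/m at the plate
    cross_speed = (speed_kph / 3.6) * math.sin(math.radians(CAMERA_ANGLE_DEG))
    return cross_speed * shutter_s * px_per_m

for speed in (20, 40, 60):
    for denom in (150, 250, 500):
        print(f"{speed} km/h @ 1/{denom}: ~{blur_px(speed, 1 / denom):.1f} px of blur")

# Slow traffic stays acceptably sharp around 1/150-1/250, while the blur grows
# with speed - which is why faster cars push the shutter toward 1/500.
```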

Illumination - the intensity of the illuminator can be adjusted on most modern cameras. We should set it while configuring the camera and LP analytics for night work. This one has to be set strictly based on the scene, so there is no universal value, but if you take a bit of time you will get fewer over-illuminated LPs, which gives you a more precise system.

Most cameras' working mode depends on whether we are using PAL or NTSC; it should always be set according to the standard of the country you are implementing in.

Is there more? Yes!

In this post I have only covered the most basic things we need to keep in mind, but there are far more factors at play. Scene configuration and camera positioning to get the highest possible LP read rate are especially crucial, and we also need to check the settings in different weather conditions and at different times of day to set everything optimally. The settings have to be different if the camera is, for example, facing the sun for most of the day; the opposite orientation will have to be configured accordingly.

Optimizing camera settings is not easy and takes a bit of work, but it is incredibly rewarding and lets us use the cameras we have to their full potential.

If you want to know more!

You can ask me under this post and I'll try to help you as much as possible.



Hello Roderick, 

Most of the time I would suggest using different cameras for LPR and facial recognition.
The important thing is that LPR uses different settings; you should also aim the LPR camera at the specific lanes you want to cover, so you get the highest possible accuracy.
And if you are looking at one camera capturing both the LPR and the person driving, :) then it gets difficult. During the day without direct sun it is OK, but any kind of reflection makes it impossible to get the face.

As for the actual camera, anything works when using server-side analytics. I would focus on the optics and ease of use. There are some nice mini PTZ cameras that allow you to change the angle remotely - super nice for city projects where every change to a camera otherwise needs a crane to reach it.

Tell me a bit more about the case and I'll look into it further :)


Hello Piotr, we are seeing a large uptick in interest in Smart Cities in Australia. The best way for Local Councils to reduce their costs while improving the service levels they provide to the community is automation and the convergence of several emerging technologies, including Artificial Intelligence. We are partnering with AxxonSoft to provide a Video Management System (VMS) to process the incoming video streams from surveillance cameras spread throughout a precinct. They want to answer questions like:

(1) How do people move from business to business throughout the night during a LIVE gig event?
(2) Do people come individually, in pairs or move in groups?
(3) Which suburbs are the people coming from?
(4) Which type of LIVE gigs are better attended?
(5) What's the impact of outdoor dining on tourism in the target precinct?
(6) What type of business investment into a target precinct provides the best return on investment?
(7) Based on the number of triggered alarms, which are the 'safest' precincts to attend LIVE gigs in?
(8) Finally, how do people travel to and from these events (a) Public Transport, (b) Private Cars, or (c) Active Transport (Walking or Cycling)?

I am therefore using the following Axxon One features: Heat Map, Abandoned Object Detection, MomentQuest, Smoke & Fire Detection, Behaviour Analytics, Facial Recognition and Crowd Detection. Which non-PTZ camera would you recommend that we use for these applications?



(1) How do people move from business to business throughout the night during a LIVE gig event?

Could you explain the question a bit? Do you mean tracking objects and checking at which LIVE gig event an object stops to watch?

(2) Do people come individually, in pairs or move in groups?

https://docs.axxonsoft.com/confluence/display/one20en/Functions+of+the+neurocounter
This article may interest you: you can specify the number of objects required to trigger the event.
(Keep in mind that the neurocounter can be GPU-consuming. Use the calculator from the web if needed.)

(3) Which suburbs are the people coming from?

This one can be based on LPR if they come by car. You can use the AxxonNet/Data dashboards to check the incoming data and visualize it for convenience.

(4) Which type of LIVE gigs are better attended?

The best approach would be to count the people entering the site, or to use a heat map.

(5) What's the impact of outdoor dining on tourism in the target precinct?

You can visualize the data gathered by the system in AxxonNet/Data and use it to check the frequency, the days most used for outdoor dining, etc.

(6) What type of business investment into a target precinct provides the best return on investment?

As above, this is data mining based on the gathered data :)

(7) Based on the number of triggered alarms, which are the 'safest' precincts to attend LIVE gigs in?

If your system is based on alarms for specific events, or users are trained to trigger a non-analytics alarm when they spot something, you can pool the alarm counts by the cameras involved and build statistics from that data. It would need checking, but I'm pretty sure I can pull it and visualize it in Dashboards.
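As an illustration of the idea only (a minimal sketch assuming a hypothetical CSV export of alarm events with camera and timestamp columns; the real export format depends on your Axxon One setup), the aggregation itself is trivial:

```python
import pandas as pd

# Hypothetical alarm export: one row per triggered alarm.
# Assumed columns: camera, timestamp - adapt to your real export.
alarms = pd.read_csv("alarm_export.csv", parse_dates=["timestamp"])

# Alarm count per camera - the basis for a per-precinct "safety" ranking,
# assuming each camera is mapped to a precinct elsewhere.
per_camera = (alarms.groupby("camera")
                    .size()
                    .sort_values(ascending=False)
                    .rename("alarm_count"))
print(per_camera)

# Alarm count per camera per month, e.g. to compare event nights or seasons.
per_month = alarms.groupby(["camera", alarms["timestamp"].dt.to_period("M")]).size()
print(per_month)
```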

(8) Finally, how do people travel to and from these events (a) Public Transport, (b) Private Cars, or (c) Active Transport (Walking or Cycling)?

Hard to check without external data. We can check (b) with LPR, but for (a) you would need to pull ticketing data and compare. As for (c), we tested it a bit in a city solution and it is hard; too much of the data is inconclusive.

Which non-PTZ camera would you recommend that we use for these applications? 

Hard to say, as my experience is more software-based than hardware-based. Honestly, I would suggest a 5 MPx camera at most; 8 MPx still takes too much bandwidth and archive space to be efficient for large-scale usage.


Hi Piotr,

Thank you for the responses. Here's my idea for Smart Cities:

Target Area: a popular multicultural dining street with many different types of restaurants (Thai, Vietnamese, Chinese, Lebanese, Greek, Italian, etc.) on both sides of the street. The Local Council cordons off the street block and allows businesses to spill onto the street = 24/7 Street Dining.

1) Strategically place cameras at the entry and exit points, plus several cameras amongst the diners on the street.

2) Axxon One AI Analytics and Forensic Search - Can track an individual's movements from entry to exit. Which business did they eat at? How long did they stay? Were their facial expressions happy, angry or sad? Did she come with friends? How many? Did they visit a pub for drinks before dinner? We don't need to know the person's name, just their sex and age.

3) Axxon One Heat Map - Can show which businesses did well in footfall traffic over a specific time period. Which businesses did well that night in terms of diners? How many diners did they serve on average for the night?

4) Axxon One People Counting - Council puts on a heavy metal concert and 20,000 people rock up. The following night they put on a classical music concert and 100,000 people turn up. Council now knows which events do well for a particular target precinct and can provide more of that type of entertainment.

5) Axxon One Skeletal Analytics - out of 100,000 patrons, 3 people have a heart attack and the event security guards and ambulance are immediately notified, resulting in all three patrons being revived in time and surviving the ordeal, making the event a safer one for all ages.

6) Axxon One AI Analytics and Forensic Search - cameras can be extended to public transport areas. Can track an individual if they came by bus, train, light rail, Uber or taxi. Which buses were busiest during the heavy metal concert? Did the patrons come from the Western suburbs, the Eastern suburbs or the North Shore? Did they come individually or as a group? Which time period was the busiest time for public transport before and after a heavy metal concert and a classical music concert?

7) Axxon One Licence Plate Recognition, MomentQuest and Forensic Search - can track individuals/groups that came by car. Which suburbs are the cars registered in? Could they have taken public transport and still chose to drive into the precinct?

8) Axxon One AI - can be trained to calculate a target precinct's Vibrancy Score which can then help property developers know which areas are best to invest in for greater returns. Decision making based on real-time data.

Would the above scenarios work? Is Axxon One capable of delivering on the above assumptions?

 


Hi Piotr, I am already working with Mike and Viacheslav. I just wanted to share on the community forum just in case someone has already successfully deployed a Smart City project who could give me an insight or two. Which module of Axxon One to use for a specific scenario and which camera to use and why. Thank you for your information to date. It's been helpful.

