Smart transportation is a key objective in smart city initiatives. But how many projects have actually been deployed? This note examines some interesting case studies from Taiwan.
Urbanization is in full swing: more residents are migrating from rural to urban areas, which leads to municipal issues such as crime and pollution.
Transportation, too, has become a headache. The number of cars keeps rising, making it more difficult to park or to drive on congested highways. Here, AI and IoT can address traffic-related issues and make driving and bus rides a better experience.
In its report, Grand View Research points out: “The necessity for presenting real-time traffic information of different regions to passengers and drivers is one of the significant factors driving the demand for intelligent transportation systems across the world.” According to the report, the global intelligent transportation system market size was valued at US$26.58 billion in 2019. It is projected to register a compound annual growth rate of 5.8 percent from 2020 to 2027.
Indeed, this all sounds promising. But how many real use cases are there? At the Taipei Smart City Summit and Expo (SCSE), held March 23-26, the city demonstrated many interesting projects. Below are some of them.
Dynamic traffic lights
Traffic bottlenecks are often caused by inefficiencies in static traffic signals. For example, a green light may last for minutes even when there is little vehicular or pedestrian traffic, causing congestion to build on the intersecting roadway.
One answer to this is dynamic (or adaptive) traffic signal control, under which each signal cycle is automatically adjusted to the traffic situation at the time.
Taipei City, in collaboration with International Integrated Systems (IISI), has developed and deployed dynamic traffic controls at various intersections. The system entails IP cameras that process video data on the edge. The resulting metadata is then transmitted to the backend, which controls the signals.
“Regular-hour and rush-hour traffic conditions are different. Signal cycles can be adjusted accordingly, as seen by the IP camera,” said Gw Chen, Planner for Intelligent Transportation Division at IISI.
“Also, a pedestrian crossing tends to have more people during daytime and fewer people during the night. Our dynamic traffic control system can adjust the length of green lights accordingly, ensuring smoother traffic,” Chen added.
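The adjustment logic Chen describes can be sketched in a few lines. This is an illustrative simplification only: the function name and tuning constants below are assumptions, not IISI's actual algorithm.

```python
# A simplified adaptive green-time rule: scale the green phase to the queue
# length the camera reports, clamped to safe minimum and maximum durations.

MIN_GREEN_S = 15           # shortest allowed green phase, in seconds (assumed)
MAX_GREEN_S = 90           # longest allowed green phase (assumed)
SECONDS_PER_VEHICLE = 2.5  # rough headway: time for one queued vehicle to clear

def adaptive_green_time(queued_vehicles: int) -> float:
    """Return a green-phase duration proportional to the detected queue,
    never shorter than MIN_GREEN_S or longer than MAX_GREEN_S."""
    wanted = queued_vehicles * SECONDS_PER_VEHICLE
    return max(MIN_GREEN_S, min(MAX_GREEN_S, wanted))

print(adaptive_green_time(2))   # near-empty intersection: clamped to the minimum
print(adaptive_green_time(50))  # long rush-hour queue: capped at the maximum
```

A real deployment would also coordinate adjacent intersections and respect pedestrian crossing minimums, but the clamp-to-bounds pattern is the core idea.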
The solution has already shown positive results. IISI data shows that average traffic idle time has been reduced by 15 percent, and carbon emissions by 1,114 tons, at intersections where the signals are installed.
Autonomous buses, 5G and V2X
The concept of autonomous vehicles is gaining traction, and in many cities autonomous buses, not just cars, are being trialed. This includes Taipei, where a lane on Xinyi Road is dedicated to autonomous bus trial runs. The city is currently collaborating with telecom operator Far EasTone (FET), which provides the 5G network.
Why use 5G over 4G? The answer lies in low latency.
According to Internet of Vehicles protocols, autonomous vehicles must work within a latency budget of 100 milliseconds. This is only achievable through 5G; under 4G, a back-and-forth communication cycle takes some 15 seconds.
“The autonomous bus needs to constantly interact with its surroundings. Sometimes the bus lane becomes a bit curved. Sometimes, something is in front. The bus needs to respond in a matter of milliseconds,” said Edward Cheng, Assistant Manager for Enterprise and Carrier BU at FET.
“In this sense, only 5G can provide the low-latency communication that makes this possible,” he said.
Through this collaboration, the Taipei autonomous bus trial has entered its second phase. If all goes well, commercial operation can begin in the near future.
FET, meanwhile, is also planning V2X proof-of-concept in Taiwan. V2X refers to vehicle-to-everything, an umbrella that includes various sub-technologies.
V2I, for example, allows communication between vehicles and infrastructure, so that data picked up by a roadside unit can be transmitted to a vehicle’s onboard unit; drivers can then avoid congested roads or full parking lots. V2V, on the other hand, enables communication between vehicles.
Other smart transportation solutions
Several exhibitors also showcased their latest transportation-related solutions at the show. LILIN, for example, demonstrated its solution for law enforcement: LILIN IP cameras can capture violations including illegal parking, illegal U-turns and illegal left turns, and its license plate recognition software can further help police issue fines. Projects have already been deployed in several cities in Taiwan.
LILIN also offers a fisheye solution that can identify all vehicle types (cars, trucks, bikes) in a given area with no blind spots. The resulting data can be used by city administrators for traffic management and planning.
AXIS Q6215-LE PTZ Network Camera and AXIS P1378-LE Network Camera, together with the RoboticsCats AI-Cloud wildfire detection SaaS (Software-as-a-Service), automatically send alerts with smoke images to the staff of Big Tree Animal Sanctuary and Adoption Center (Big Tree) whenever a potential risk of wildfire is detected near the center, so the staff can take timely action to save the animals.
Big Tree Animal Sanctuary and Adoption Center (Big Tree) is located in Kam Tin, a remote rural area of Hong Kong. It operates one of the largest animal sanctuaries in the city, with more than 180 dogs and cats living on site, and its mission is to advocate for animal rights by sheltering and caring for abandoned animals. The center is surrounded by hills and trees and faces a high risk of wildfire, especially around the Ching Ming and Chung Yeung festivals.
On the late evening of 25 October 2020, a wildfire that had ignited the day before spread to just meters from the center. Nearly 300 people heeded an online call for volunteers and rushed to rescue the animals. Luckily, all the animals were kept safe during the evacuation, but the incident highlighted the need for a solution to prevent this kind of emergency from recurring.
An early wildfire detection solution was proposed to Big Tree, comprising the AXIS Q6215-LE PTZ Network Camera and AXIS P1378-LE Network Camera together with the RoboticsCats AI-Cloud wildfire detection SaaS (Software-as-a-Service). Alerts with smoke images are automatically sent to Big Tree staff members whenever a potential wildfire risk is detected, so the staff can take timely action to save the animals.
Reliable surveillance around the clock
AXIS Q6215-LE PTZ Network Camera is a reliable and robust network camera designed with high-precision 360º pan, +/-90º tilt and long-range IR for wide-area, long-distance surveillance, making it well suited to 24/7 patrol monitoring of the area within 1 km of the center. It can run guard tours, or its direction and angle can be controlled remotely according to the center’s needs. AXIS P1378-LE Network Camera provides 4K-quality surveillance around the clock of specific high-risk areas. Smoke can be detected as far as 7 km from the center.
Automatic and efficient wildfire alert to staff for timely action
By setting the action rules in the two network cameras, the videos are transmitted to RoboticsCats AI-Cloud via 5G router regularly. If AI-Cloud detects potential wildfires, alerts and smoke images will be automatically sent to the Big Tree staff members with the ReportFires mobile app. The staff can take timely action to save the animals if there is wildfire nearby.
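The alerting step described above can be sketched as a simple rule over cloud detection results. The function and field names below are assumptions for illustration; the real RoboticsCats API and data model will differ.

```python
# Hedged sketch of the alert rule: if the cloud analysis flags a potential
# wildfire with enough confidence, produce an alert carrying the smoke image.

def make_alerts(detections, confidence_threshold=0.5):
    """Turn cloud detection results into staff alerts (illustrative only)."""
    alerts = []
    for d in detections:
        if d["label"] == "smoke" and d["confidence"] >= confidence_threshold:
            alerts.append({
                "message": "Potential wildfire detected",
                "image": d["frame_id"],       # smoke image reference
                "confidence": d["confidence"],
            })
    return alerts

results = [
    {"label": "smoke", "confidence": 0.87, "frame_id": "cam1-0042"},
    {"label": "cloud", "confidence": 0.90, "frame_id": "cam1-0043"},
]
print(len(make_alerts(results)))  # 1: only the smoke detection triggers an alert
```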
Robust operation under extreme weather conditions
Since both network cameras were installed in a remote outdoor area, it was important to minimize maintenance needs. “AXIS Q6215-LE PTZ Network Camera is a heavy-duty network camera which can withstand winds up to 245 km/h. The vandal-resistant IK10-rated casing and IP66/IP68 ratings protect the camera from harsh weather conditions and impacts. With the built-in wiper, staff can remotely clean the network camera to ensure that useful images can be obtained even in heavy rain,” said Bede Lau, Key Account Manager at Axis Communications Hong Kong.
Excellent image quality regardless of light conditions
“Big Tree is surrounded by hills and trees and rarely has light during the night time. Both network cameras feature Axis Lightfinder technology which can deliver high-resolution, full color video even in near darkness. This technology can help recognize and identify potential wildfires in large open areas even in poor light or complete darkness,” said Peter Chiu, Senior Key Account Manager for Hong Kong, Macao and Mongolia.
“This early wildfire detection solution acts like a virtual security guard which provides non-stop surveillance for the center around the clock. The network cameras provide 24-hour patrol around the center. Although we cannot stay in the center throughout the day, we can still feel safe and secure by this remote and proactive monitoring solution,” said Jojo Chan, Volunteer, Big Tree Animal Sanctuary and Adoption Center. Big Tree can now be monitored remotely, detecting the potential risk of wildfire more efficiently 24 hours a day, 365 days a year, and keeping the animals safe and secure in the center.
Despite their advantages, cloud-based ANPR solutions are not without limitations or challenges in their implementation. These are mostly technical but can, at times, be the result of budgeting concerns as well. There are several factors that a customer should be aware of before installing an ANPR solution.
The most obvious limitation is that analytics on the cloud may not be suitable for solutions that require immediate action based on the insights. For instance, a solution that opens a gate or boom barrier to a parking lot after identifying the user through ANPR needs to open the gate within seconds; failing to do so will result in irate users or customers. Network-related delays cannot be afforded here, and hence a solution on the edge works better.
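The timing argument can be made concrete with a simple budget check. The deadline and the timing figures below are illustrative assumptions, not measured values.

```python
# Sketch: a parking-gate ANPR decision must complete within a short deadline,
# so the network round trip to a cloud service matters as much as recognition.

GATE_DEADLINE_MS = 2000.0  # assumed acceptable wait at the barrier

def opens_in_time(recognition_ms: float, network_rtt_ms: float) -> bool:
    """The gate opens on time only if recognition plus any network round
    trip fit inside the deadline."""
    return recognition_ms + network_rtt_ms <= GATE_DEADLINE_MS

# Edge processing has no round trip to a data center; a congested or slow
# link to a cloud service can consume the whole budget by itself.
print(opens_in_time(300.0, 0.0))     # True: edge processing fits easily
print(opens_in_time(300.0, 2500.0))  # False: a slow link blows the deadline
```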
Cloud-based solutions – as the name suggests – require an Internet connection to work, so they may not be feasible for projects where having an Internet connection is out of the question.
“Event detection is also an issue in case of cloud-based ANPR, which can be avoided if the bandwidth allows for a continuous video stream of the monitored areas (as in this case our solution uses video analytics-based triggering), but this is not yet realistic in terms of accessible infrastructure and also when considering economic reasons,” explained Adrian Cseko, Head of Sales at Asura Technologies. “Another way to tackle the problem is either an image pre-selection mechanism set in place, triggers (like inductive loops) or cameras that include some sort of triggering mechanism, the latter solution, however, may prove more costly due to the price difference compared to standard IP cameras that provide sufficient image quality for ANPR.”
The basic rule of thumb when selecting a camera for ANPR is that if a license plate is recognizable to the human eye, an ANPR solution will recognize it too. That said, since cameras on highways often have to deal with rugged conditions, there are several instances where they have failed to give continuously clear visuals.
“Camera image quality is essential during all weather and lighting conditions,” said Walter Verbruggen, Sales Director at Avutec. “A dedicated ANPR camera system will always outperform another type of camera, as it is optimized for ANPR image quality, offers more speed and does not require any image or video compression, compromising the image quality.”
Camera placement is also essential to capturing license plates. Cameras placed too far or too near produce images that are not useful, and installing them too high or too low is similarly problematic. Finally, when visuals are blurred by rain, fog, dust or other such elements, an ANPR solution will have difficulty recognizing the plates. It should be noted that these kinds of problems pose a challenge to any ANPR solution, whether it runs on the cloud or on the edge.
Lack of customizability
According to Gabor Jozsa, CMO at Adaptive Recognition, issues like network connectivity are not limited to ANPR but can affect any cloud-based system. A more significant limitation, however, is the difficulty of providing services that need certain unique recognition features.
“Limitation in customized features is an issue,” Jozsa said. “Sometimes, the customer’s application requires specific OCR engines and recognition functions which can be provided perfectly with our on-premise solutions.”
Limitations vs. advantages
To conclude, both cloud-based and edge-based ANPR solutions have their advantages and disadvantages. Each suits different verticals and applications, which also makes direct comparison difficult. A customer’s purchase decision should be based on the application.
The new generation of thermal cameras is suitable for installation at private businesses and government organizations to help scan employees and customers for fever. This body temperature detection camera represents the latest generation of AI thermal imaging, designed for accurate body temperature measurement. Scanning for people with elevated body temperature can help identify the early symptoms of a virus before it has a chance to spread further. The camera can check body temperature accurately even when a person walks past at normal speed while wearing a mask, a hat or even a helmet. It speeds up on-site screening, raising audio and visual alerts whenever a temperature is above normal so that staff can screen people quickly and prevent the virus from spreading.
The human temperature camera measures and scans individuals quickly and without contact as they walk past.
Efficient screening can help prevent the spread of disease.
By scanning people rapidly and contactlessly, this body temperature camera can speed up throughput at factories, production lines and retail stores.
The body temperature screening system screens individuals quickly and with high accuracy. It offers an Application Programming Interface (API) that can connect to an organization’s own software, making it suitable for large organizations that want to link the temperature readings of registered employees or users to their personnel systems, and for fever screening services in public buildings such as hospitals, metro stations, bus terminals, shopping malls and large office buildings.
This smart temperature detection camera solution for the COVID-19 situation comes with high-precision, fast body temperature detection technology. It can measure up to 15 people per second at a range of up to 3 meters, and its built-in sensor and speaker raise an immediate alert when a body temperature exceeds the configured threshold, with a measurement tolerance of only +/-0.3 degrees Celsius. It also works alongside CCTV security systems: it can detect faces, record both still images and video, log temperature readings and record people entering and exiting in real time, with the data available for later review.
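The threshold-alert behavior can be sketched as a small rule that accounts for the stated measurement tolerance. The 37.5 °C cutoff below is an assumed screening threshold for illustration; the +/-0.3 °C tolerance comes from the text.

```python
# Simplified sketch of a fever-screening alert rule.

FEVER_THRESHOLD_C = 37.5   # assumed alarm threshold, not from the source
TOLERANCE_C = 0.3          # stated measurement tolerance (+/-0.3 °C)

def should_alert(measured_temp_c: float) -> bool:
    """Alert conservatively: flag anyone whose true temperature could
    plausibly reach the threshold once tolerance is taken into account."""
    return measured_temp_c + TOLERANCE_C >= FEVER_THRESHOLD_C

print(should_alert(36.5))  # False: clearly below the threshold
print(should_alert(37.3))  # True: within tolerance of the threshold
```

Erring on the side of a second manual check is the usual design choice here, since a missed fever is costlier than a false alarm.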
It also reduces the risk staff face from direct contact when taking patients’ temperatures. Combining health monitoring (Smart Health) and security (Smart Surveillance) in a single solution, it can screen people effectively to prevent infection and curb the spread of COVID-19.
The building automation market has achieved significant growth in recent years, driven by a combination of decreased costs and increased awareness of the benefits. Smart building automation systems, which refer to the automated, centralized control of a building’s heating, ventilation and air conditioning, lighting and other systems through a building management or automation system, increase convenience, optimize resource usage and have the potential to lower operational costs.
According to the research firm Stratistics MRC, the global building automation market accounted for US$57.83 billion in 2017 and is expected to reach $154.36 billion by 2026, growing at a CAGR of 11.5 percent over the period. Given this strong growth potential and the prospect of high returns, several companies are competing for market share. We have compiled a list of some of the top building automation companies to watch in 2020.
With a presence in several industries and over a century of market experience, the German company Bosch is one of the major players in the building automation sector. Under its building solutions division, Bosch offers integrated solutions for buildings, providing consultancy, installation and services for design and operation. Further, its Building Integration System (BIS) software helps manage different Bosch security subsystems, including video surveillance, fire alarm, access control, public address and intrusion systems, on an integrated platform.
Another German company, with several years of experience under its belt, is Siemens. In a whitepaper, the company had said that the commitment and vision that Siemens has to the evolution of smart buildings is demonstrated by the investment the business has made in the digitalization agenda.
“This includes the acquisition of three innovative start-up companies specifically to strengthen its portfolio of solutions for smart buildings,” the company said. “These strategic investments – the purchase of Comfy by Building Robotics (provider of a building occupant app), Enlighted (provider of sensor and building analytics) and J2 Innovations (building automation and operating system vendor) – demonstrate a commitment towards staying at the leading edge of technology.”
The US-based Honeywell International is present in industries ranging from aerospace to physical security and consumer appliances. In the building automation space, the company offers its Niagara Framework-based building management system (BMS), which takes all aspects of a building and its occupants’ needs into consideration to maximize energy efficiency and make facility management simpler and more user-friendly.
The company’s software solution Vector Space Sense also helps customers understand how a particular building is being used in real time, allowing management to take full advantage of the available resources.
The Irish multinational conglomerate Johnson Controls offers a building automation system branded Metasys, which connects HVAC, lighting, security and protection systems, enabling them to communicate on a single platform to deliver the information customers need, helping them make smarter, savvier decisions while enhancing occupants’ comfort, safety and productivity. Other Johnson Controls solutions include BCPro, for Asian and Middle East markets, and Verasys, a plug-and-play system for light commercial buildings.
The France-headquartered Schneider Electric offers solutions in a wide range of areas ranging from home automation to industrial safety systems and electric power distribution. According to a report by Technavio, Schneider Electric’s EcoStruxure Building solution is one of the first open innovation platforms for buildings with end-to-end IP architecture enabling quick connectivity of IoT devices to improve building value offering.
Source: Prasanth Aby Thomas, Consultant Editor
When the India-based pharmaceutical giant Cipla considered ways to improve their manufacturing process, a major problem that caught their attention was that the machine vision cameras they used to quality-check finished products couldn’t quite identify transparent capsules. They also found that the solution they used then had difficulty in identifying dusty tablets. Such things could prove to be critical in ensuring operational efficiency, quality control and deciding production costs.
This is where Spookfish Innovations, which develops machine vision solutions for manufacturing units in the pharmaceutical sector, came into the scene. Cipla was clear on what it wanted: solve these specific pill-identification problems. Spookfish, with its computer vision and machine learning algorithms, was able to do exactly that, saving its customer a significant headache.
This is one of many examples of how machine vision is transforming the manufacturing industry. The market is huge, as Anupriya Balikai, MD of Spookfish, which has offices in Bristol and Bangalore, explains.
“There is tremendous potential for machine vision to bridge the gap in the manufacturing sector,” Balikai said. “Just to give you an example, say you have a new product that is developed. Your machines would need a change in settings to inspect this new product, and in the past, you would need an intelligent operator to change these settings. With machine vision and machine learning, you no longer need an operator; automatic learning algorithms would suggest what to change and how to change it.”
How it works
Technically, cameras by themselves just capture images, explained Rick Brookshire, Director of Product Development at Epson America. The so-called “smart cameras” have processors in them for vision processing. Vision systems and AI become significant when deciding what can be done with the captured visuals.
“For example, when training parts for recognition, AI can be used to look at hundreds of parts to define a more accepting model of a good part,” Brookshire said. “At Epson, we use Epson Vision Guide in combination with our IntelliFlex parts feeding system to auto-tune the feeder as well as determine optimal part quantities in the feeder system to maximize throughput. Other examples are where deep learning algorithms are used to help find defects.”
Elaborating on this point further, Shweta Kabadi, Senior Director and Business Unit Manager of Vision SW and Accessories at Cognex, listed the major role machine vision plays in the industrial vertical.
“AI-enabled cameras are used to perform four primary roles in factory automation: guiding, identifying, gauging and inspecting products,” Kabadi said. “Examples of guiding applications could include aligning a screen on a smartphone or guiding a robot to put a windshield in a car. Examples of identifying applications could include reading bar codes behind shrink wrap on a pallet, identifying laser-etched codes on metal pots or detecting components against noisy backgrounds with confusing patterns and glare.”
Actions such as measuring the width and depth of a brake pad as it moves on a conveyor belt are instances of machine vision being used in gauging applications. Identifying cosmetic defects, missing pieces and irregularities on finished products or components are examples of the technology being used in inspection purposes. This could include inspecting for potentially hazardous deformations on lithium-ion batteries as well.
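The gauging role can be illustrated with a minimal check: convert a measured pixel width to millimeters via a calibration factor and compare it to the part specification. The calibration value and tolerances below are invented for illustration.

```python
# Sketch of machine-vision "gauging": pixel measurement -> physical units
# -> pass/fail against a nominal dimension.

MM_PER_PIXEL = 0.2  # assumed calibration obtained from a reference target

def pad_width_ok(width_px: int, nominal_mm: float = 120.0, tol_mm: float = 1.0) -> bool:
    """Return True if the measured brake-pad width is within tolerance."""
    width_mm = width_px * MM_PER_PIXEL
    return abs(width_mm - nominal_mm) <= tol_mm

print(pad_width_ok(600))  # True: 600 px * 0.2 mm/px = 120.0 mm, on nominal
print(pad_width_ok(630))  # False: 126.0 mm is out of tolerance
```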
Benefits of machine vision in factories
AI-enabled cameras allow manufacturers to perform critical functions without making contact with the product or slowing down their lines. They can inspect hundreds, or even thousands, of parts per minute, far exceeding the inspection capabilities of humans. They can also inspect object details that are too small to be seen by the human eye.
Source: Prasanth Aby Thomas, Consultant Editor
Axis Communications announces the release of AXIS companion software for simple, secure and reliable video management.
This easy-to-use video surveillance solution is optimized for small systems up to 16 cameras and is ideal for small businesses needing to monitor their premises, people and assets. AXIS Companion software is intuitive and easy to operate, letting users quickly learn to navigate the system with minimal instruction. Alert notifications keep business owners aware of any suspicious activity and can be customized to suit the customer’s business needs.
The new software includes 3 different levels of multi-user support (Administrator, Operator, and Viewer) making it easy to ensure every user has access to what they need. For instance, it’s possible to grant administrators access to everything while only allowing other users to access things like PTZ control and video playback. Additionally, Axis Secure Remote Access technology allows users to access live or recorded video on a mobile device or PC without the need for network or router configuration. And, thanks to Axis Remote System Management it’s possible to restart or upgrade devices and manage user permissions without being physically onsite and in many cases, resolve the issue right there on the spot.
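The three-level user model can be sketched as a role-to-permission mapping. The permission names below are illustrative; AXIS Companion's actual capabilities may be named and grouped differently.

```python
# Sketch of role-based access: each role maps to the set of actions it may take.

ROLE_PERMISSIONS = {
    "Administrator": {"configure", "manage_users", "ptz_control", "playback", "live_view"},
    "Operator": {"ptz_control", "playback", "live_view"},
    "Viewer": {"live_view"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(can("Operator", "playback"))        # True
print(can("Viewer", "ptz_control"))       # False
print(can("Administrator", "configure"))  # True
```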
This cost-effective solution is easy to set up, which helps ensure every installation is trouble-free and reduces the cost of training and support. Furthermore, it helps increase speed and efficiency by eliminating lengthy waiting periods and reducing system downtime.
Source: Axis Communications Date: 2019/10/21
With rapid urbanization and increased population density in cities, there is a heightened need for mobility solutions. Private vehicles are a preferred mode of transportation for many people in developed economies. As the standard of living continues to go up in several parts of the world, more and more people and companies buy new cars.
This has brought with it the challenge of creating space to park these cars in cities. The concept of the parking lot has evolved quite a bit over the years from just a place where people could leave their cars to places that are managed by automated solutions to ensure security and operational efficiency.
Malls and other commercial centers are also increasing in cities, attracting more and more people who prefer to drive in with their cars. This has increased the need for efficient parking lot management systems in malls, not just to ensure people have a hassle-free experience but also to avoid wasting money and resources.
Nevertheless, there are several challenges that mall management and solution providers face when it comes to managing parking lots. Some of these challenges are the reason automated systems have come into place. Others persist despite their introduction.
Manual ticketing is time-consuming
Before venturing into the realm of automated parking lot systems, let’s take a look at why electronic solutions should be used. Manual ticketing systems take up time and require more manpower, resulting in higher costs and slower processing.
While this may be seen as an obvious issue to many, the fact is that there are still several malls and commercial entities across the globe that are yet to make a shift from manual ticketing systems.
Paper-based ticketing systems also make the job of information management difficult. In case of any untoward incidents, the management should be able to provide information about any vehicle parked in their space immediately. Automated electronic systems make this possible.
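The record-keeping benefit amounts to being able to answer "is this vehicle here, and since when?" immediately. A minimal sketch, with invented plate numbers and a made-up record structure:

```python
# Sketch: an automated system keeps a queryable record of every parked vehicle.

parked = {
    "ABC-1234": {"entered": "2019-06-21T09:14", "level": "B2", "bay": 41},
}

def lookup(plate: str):
    """Return the parking record for a plate, or None if it is not on site."""
    return parked.get(plate, None)

print(lookup("ABC-1234")["bay"])  # 41
print(lookup("ZZZ-0000"))         # None: vehicle not on the premises
```

In practice the records would be populated automatically by ANPR at the entry barrier rather than by hand.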
Access control ticketing system failure
One of the worst nightmares for a parking lot manager is the malfunctioning of an access control system. Since malls are often open for long hours, nearly every day of the week, parking lots are in use most of the time. Any failure of the entry management system could cause delays and upset customers.
False damage claims
According to Arvind Mayar, CEO of Secure Parking Solutions, there are always some customers who try to claim that their car was damaged while in the parking lot when in reality the car was already damaged before entering the lot.
To deal with such an issue, there is a need for high-quality surveillance solutions that can provide clear images of the condition of a vehicle at the point of entry. Adequate lighting is also required to support the surveillance systems that are being installed.
Installing new parking solutions at existing malls and shopping centers is a challenge. But perhaps what’s even more difficult is integrating these solutions with third-party systems. For instance, surveillance and fire safety may be managed by different vendors. Unless all the companies involved are willing to support integration, operations could be tough.
Open standards for traffic data exchange like Datex II become relevant in this context. Fortunately, major companies do support such standards. For instance, Siemens’ intelligent parking solution offers links to third-party applications via open standards such as DATEX II. This interface can allow integration of the data produced by the system for payment providers, enforcement and in-vehicle platforms that consume data in order to provide services that add value to the infrastructure in place.
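Consuming such exchanged data typically means parsing structured XML. The element names below are simplified stand-ins invented for illustration; the real DATEX II schema is considerably more elaborate, with namespaces and versioned publications.

```python
# Hedged sketch of consuming parking-occupancy data exchanged as XML.

import xml.etree.ElementTree as ET

SAMPLE = """
<parkingStatus>
  <facility id="mall-p1"><spacesTotal>400</spacesTotal><spacesFree>37</spacesFree></facility>
  <facility id="mall-p2"><spacesTotal>250</spacesTotal><spacesFree>0</spacesFree></facility>
</parkingStatus>
"""

def free_spaces(xml_text: str) -> dict:
    """Map each facility id to its current number of free spaces."""
    root = ET.fromstring(xml_text)
    return {f.get("id"): int(f.find("spacesFree").text) for f in root.findall("facility")}

print(free_spaces(SAMPLE))  # {'mall-p1': 37, 'mall-p2': 0}
```

A payment or in-vehicle application could poll such a feed and steer drivers away from full facilities.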
Source: Prasanth Aby Thomas, Date: 2019/06/21
The hardware needed for a business to deploy a face recognition solution can vary depending on the application. Not every situation requires the highest-resolution camera or the most computing power, nor does every environment pose the same challenges (e.g., lighting, crowding, weather, etc.).
Generally, in order to deploy a face recognition system what is needed are a well-tuned camera, local compute power, and software. Hardware systems must be paired with the appropriate compute power to run facial detection efficiently, which depends on whether you are managing a high- or low-density environment.
However, hardware requirements can vary greatly depending on the application and deployment architecture. For example, secure-access use cases, where you are viewing a few faces at a given time, can leverage lower-resolution cameras with lower frame rates and require less compute power (in addition to deploying fewer cameras), which effectively lowers your total cost of ownership (TCO), explained Dan Grimm, VP of Computer Vision and GM of SAFR and RealNetworks.
On the other hand, when using watchlists, deploying more cameras can improve accuracy and performance. Grimm added, “If the facial recognition platform supports a distributed architecture by doing detection at the edge and recognition in the cloud, then you’ve not only lowered TCO, you’ve also increased your ability to scale in a massive way.”
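The split Grimm describes, cheap face detection at the edge with only cropped faces sent to the cloud for recognition, can be sketched as a bandwidth comparison. The frame and crop sizes below are illustrative assumptions.

```python
# Sketch: upstream bandwidth with and without edge-side face detection.

FRAME_KB = 900      # assumed size of a full frame sent to the cloud
FACE_CROP_KB = 40   # assumed size of a cropped, detected face

def upstream_kb(frames: int, faces_found: int, edge_detection: bool) -> int:
    """Data sent upstream: whole frames without edge detection,
    only face crops with it."""
    if edge_detection:
        return faces_found * FACE_CROP_KB
    return frames * FRAME_KB

# 1000 frames containing 12 detected faces: edge detection cuts traffic sharply.
print(upstream_kb(1000, 12, edge_detection=False))  # 900000
print(upstream_kb(1000, 12, edge_detection=True))   # 480
```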
In the early days of face recognition, there was often a tradeoff between accuracy and device power. “Lower powered devices, either in terms of chipset, bandwidth requirements or camera resolution, suffered from lower accuracy,” noted Doug Aley, CEO of Ever AI.
Today Ever AI has had success in being able to deploy on everything from a single core legacy processor all the way up through a cluster of high-powered GPUs, like an NVIDIA T4. “There are now other companies like ours where the accuracy tradeoff is no longer an issue,” Aley added.
Nowadays, speed is where the major variability comes in — the more powerful the hardware, the faster the speed of matching and the faster the overall user experience.
Aley explained that most modern chipsets, especially from a quad-core onward, are going to be very fast. Furthermore, today’s face recognition models, and the frameworks off which these models are built, are getting more adept at handling lower-power chipsets.
Shawn Mather, Director of Sales for the U.S. at Intelligent Security Systems (ISS), highlighted software integration issues rather than complications with hardware. Software providers, however, can overcome these challenges by making their solutions compatible with VMS and electronic access control solutions.
The type of face recognition a business chooses to deploy, 2D or 3D, may also come with its own specific set of challenges and requirements. A report by MarketsandMarkets noted that images captured by earlier 2D face recognition technology were highly dependent on illumination, meaning poor lighting significantly affected image quality. Another challenge was the “incompatibility of integration between software tools and biometric hardware devices.”
However, the report expects 3D technology to have the largest market share in the coming years. Unlike 2D technology, 3D technology is not dependent on illumination. This enables it to capture higher-quality images in uncontrolled environments, such as poorly lit or completely dark areas.
Something else to consider in the years to come is the face recognition camera, where the recognition process is done on-board at the frontend. These cameras require strong computational power, since all of the recognition tools run on-board. Several camera companies are developing face recognition cameras, but the overall market is still in a fledgling state and remains something to look forward to.
Source: Eifeh Strom, Date: 2019/06/21
The first step a machine vision system will take to understand images collected by cameras is to adjust these images through processes such as sharpening, cutting or zooming. This processing provides meaningful information for computers to read.
As humans, we have a set of eyes capturing images, which then are sent to the brain for image identification. For machines, cameras and other visual sensors perform the function of the eyes, with software, artificial intelligence, FPGA (Field Programmable Gate Arrays) chips, CPUs and GPUs filling in for the brain.
“Image processing can be seen as the first step in analyzing video data, before it is fed to the system’s computer vision algorithms,” said Jerome Gigot, senior director of marketing at Ambarella.
Processing software can sharpen an image to improve readability, change the exposure for a clearer shot, or zoom in and crop certain information, such as a barcode or address located on a package.
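The preprocessing steps just described can be sketched in a few lines. This is an illustrative toy on a grayscale image stored as a list of lists of 0-255 integers; production systems use optimized imaging libraries, and the function names here are invented for the example.

```python
# Toy versions of three preprocessing steps: exposure, crop, sharpen.

def adjust_exposure(img, gain):
    """Brighten or darken by a multiplicative gain, clamped to 0-255."""
    return [[min(255, int(p * gain)) for p in row] for row in img]

def crop(img, x, y, w, h):
    """Keep only a region of interest, e.g. a barcode on a package."""
    return [row[x:x + w] for row in img[y:y + h]]

def sharpen(img):
    """3x3 sharpening convolution (center-weighted Laplacian kernel)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            v = (5 * img[r][c] - img[r-1][c] - img[r+1][c]
                 - img[r][c-1] - img[r][c+1])
            out[r][c] = max(0, min(255, v))
    return out

img = [[10, 10, 10, 10],
       [10, 50, 50, 10],
       [10, 50, 50, 10],
       [10, 10, 10, 10]]
brighter = adjust_exposure(img, 2.0)
roi = crop(img, 1, 1, 2, 2)
sharp = sharpen(img)
print(roi)            # [[50, 50], [50, 50]]
print(brighter[1][1]) # 100
```

Each step makes the downstream vision algorithm's job easier: cropping discards irrelevant pixels, exposure correction normalizes lighting, and sharpening amplifies the local contrast that edges depend on.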
“The type of data that will be analyzed heavily depends on the manufacturing function that needs to be performed,” said Gigot.
Industrial objects, for instance, can be inspected by size, shape, color, and texture. These same variables can also be used to recognize agricultural or biological objects.
The second step is to have an algorithm that first distinguishes between the many different pieces of an image, then identifies the edges and models its subcomponents.
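The edge-identification part of this second step can be illustrated with a Sobel-style gradient filter: compute how sharply intensity changes in the horizontal and vertical directions, and mark pixels where the change is large. This is a simplified sketch, not a specific product's algorithm; the threshold and test image are made up.

```python
# Toy edge detector: gradient magnitude thresholding.

def edges(img, threshold):
    """Return a binary map marking pixels with strong intensity gradients."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = img[r][c + 1] - img[r][c - 1]   # horizontal gradient
            gy = img[r + 1][c] - img[r - 1][c]   # vertical gradient
            if gx * gx + gy * gy > threshold * threshold:
                out[r][c] = 1
    return out

# A bright square on a dark background: edges fire around its border.
img = [[0] * 6 for _ in range(6)]
for r in range(2, 4):
    for c in range(2, 4):
        img[r][c] = 200
edge_map = edges(img, 100)
print(sum(sum(row) for row in edge_map))
```

Once the edge map is available, connected regions between edges can be grouped into the subcomponents the algorithm then models.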
In manufacturing, computer vision isn’t limited to a single niche purpose. Some systems decode barcodes, while others inspect for defects. The latter is powered by neural networks that compare how a piece of equipment looks against how it is supposed to look. When the algorithm finds an anomaly, it flags the issue for the user. Other possibilities include monitoring, predictive maintenance, safety inspection and inventory management.
Gigot offered the example of food processing: at a food processing plant, a neural network detects bad apples and instructs the system to remove them in real time as they speed through the scanner, before they are shipped out to stores.
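The compare-against-reference idea behind defect inspection can be reduced to a minimal sketch: hold a "golden" image of a good part, measure how far each inspected part deviates from it, and flag items beyond a tolerance. Real systems use trained neural networks rather than raw pixel differencing; the function names and tolerance here are invented for illustration.

```python
# Toy defect inspection: compare each sample against a golden reference.

def deviation(golden, sample):
    """Mean absolute pixel difference between reference and sample."""
    total = sum(abs(g - s)
                for grow, srow in zip(golden, sample)
                for g, s in zip(grow, srow))
    return total / (len(golden) * len(golden[0]))

def inspect(golden, samples, tolerance):
    """Return the indices of samples that deviate beyond tolerance."""
    return [i for i, s in enumerate(samples)
            if deviation(golden, s) > tolerance]

golden = [[100, 100], [100, 100]]
samples = [
    [[101, 99], [100, 100]],   # within tolerance: passes
    [[100, 100], [100, 10]],   # one anomalous region: flagged
]
print(inspect(golden, samples, tolerance=5))  # [1]
```

A neural-network inspector generalizes this: instead of a fixed pixel tolerance, it learns which kinds of deviation matter, which is what lets it catch subtle defects without flagging harmless variation.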
Seeing beyond vision with predictive capacity
“In addition to cameras, machine learning-based machine vision can also incorporate data collected from various sensors, including LiDAR, radar, ultrasound, and magnetic field sensors. The rich set of data will provide further insight into other aspects of production processes,” said Lian Jye Su, Principal Analyst at ABI Research.
Conventional machine vision only detects product defects and quality issues predefined by humans. With the help of machine learning algorithms, machine vision can pick up unexpected product abnormalities or defects, providing flexibility and valuable insights for manufacturers.
Machine vision-powered predictive maintenance uses machine learning and other connected devices to monitor data and components so that corrective action can be taken before machinery breaks down, helping manufacturers approach zero downtime and generate cost savings.
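The monitoring loop behind predictive maintenance can be sketched simply: stream a sensor reading (say, vibration level), keep a rolling baseline of recent values, and schedule service when new readings drift well above that baseline, before the machine actually fails. The window size, threshold factor, and readings below are all made-up illustration values, not figures from any real deployment.

```python
# Toy predictive-maintenance monitor: alert on drift above a rolling baseline.
from collections import deque

def maintenance_alerts(readings, window=5, factor=1.5):
    """Return the indices at which corrective action should be scheduled."""
    history = deque(maxlen=window)   # rolling window of recent readings
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            baseline = sum(history) / window
            if value > factor * baseline:   # drift beyond the normal band
                alerts.append(i)
        history.append(value)
    return alerts

# Vibration creeps up as a bearing wears; alerts fire well before failure.
readings = [1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 1.2, 1.9, 2.4, 3.0]
print(maintenance_alerts(readings))
```

In a machine vision context the "readings" would themselves be derived from images, e.g. measured wear on a belt or discoloration of a component, but the alerting logic is the same.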
Another use of machine learning-equipped machine vision systems is for monitoring worker safety. Devices can track people and predict the movement of equipment, helping to prevent dangerous interactions between people and machines.
Source: Elvina Yang, Date: 2019/06/20