The Data-Driven Future of Car Collisions: Machine Vision’s Insights

In a world increasingly reliant on data-driven solutions, the realm of automotive technology has witnessed a remarkable transformation with the advent of machine vision. This cutting-edge technology, powered by advanced sensors and artificial intelligence, is revolutionizing the way car collisions are analyzed, assessed, and repaired. The marriage of machine vision and collision analysis holds immense promise for enhancing road safety and optimizing the repair process. In this article, we delve into the data-driven future of car collisions, exploring how machine vision offers invaluable insights for auto body shops and insurance companies. With a focus on Vancouver’s auto body shop landscape, we’ll uncover how local businesses are leveraging this technology to streamline their operations.

The Power of Machine Vision in Car Collisions

Machine vision, a subset of artificial intelligence, empowers machines to interpret visual information from the world around them. In the context of car collisions, this technology involves equipping vehicles with advanced sensors and cameras capable of capturing high-resolution images and videos in real time. According to official statistics from the Canadian Institute for Health Information (CIHI), car accidents in Canada lead to thousands of hospitalizations and deaths annually. The integration of machine vision technology aims to reduce these numbers significantly by providing a more comprehensive understanding of collision dynamics.

Data Collection and Analysis in Real-Time

One of the most remarkable features of machine vision is its ability to collect and analyze data in real-time. During a car collision, machine vision-powered sensors capture a wealth of information, including images, videos, and sensor readings. This data is then processed by sophisticated AI algorithms that can provide immediate insights into the nature and severity of the collision. Such rapid analysis is instrumental in informing first responders, auto body shops, and insurance companies about the necessary actions to take.

Machine Vision’s Role in Precise Damage Assessment

Accurately assessing the extent of damage after a collision is a critical aspect of the repair process. Machine vision’s AI algorithms excel at identifying and categorizing damage based on the collected data. This capability streamlines the workflow of auto body shops by providing a detailed understanding of repair requirements even before the vehicle arrives at the shop. This not only saves time but also ensures that the necessary tools, parts, and expertise are readily available, expediting the repair process.

Estimating Repair Costs with Machine Vision

Another significant advantage of machine vision technology is its contribution to more accurate repair cost estimates. Traditionally, estimating repair costs has been a complex task, influenced by multiple variables. Machine vision’s data-driven insights enable auto body shops to provide customers with more transparent and precise cost estimates. According to data from the Insurance Bureau of Canada, motor vehicle collision claims accounted for billions of dollars in payouts in recent years. By leveraging machine vision, insurance companies can collaborate with auto body shops to streamline the claims process, reducing costs and delays.

Predictive Insights: Anticipating Accident Patterns

The predictive capabilities of machine vision extend beyond individual collision analysis. By analyzing historical data and identifying patterns, this technology can anticipate accident-prone areas and times. With Vancouver’s diverse driving conditions and traffic challenges, such insights hold immense potential for improving road safety. This could lead to targeted interventions and better traffic management strategies, ultimately reducing the frequency and severity of collisions.

Vancouver’s Auto Body Shop: Leveraging Machine Vision for Efficiency

In the heart of Vancouver’s bustling auto body shop scene, establishments like Grandcity Auto Body Shop in Vancouver are embracing the benefits of machine vision. The collaboration between local auto body shops and technology providers like Butterfleye has paved the way for a more efficient collision repair process. By integrating machine vision’s data-driven insights, these auto body shops can assess damage accurately, estimate repair costs transparently, and provide timely services to their customers.

Challenges and Considerations

Implementing machine vision in collision analysis is not without challenges. Factors such as varying lighting conditions, environmental variations, and data privacy concerns need to be addressed. However, technology providers like Butterfleye are continuously advancing their solutions to overcome these obstacles and ensure reliable performance even in diverse conditions.

The Road Ahead: Innovations and Collaboration

Looking ahead, the future of car collision analysis is brimming with innovation and collaboration. Auto manufacturers, insurance companies, and technology providers are increasingly collaborating to harness machine vision’s potential. These collaborations aim to enhance road safety, optimize repair processes, and ultimately create a more secure and efficient driving experience.

Conclusion: Embracing Data-Driven Transformation

In conclusion, the convergence of machine vision and car collision analysis is reshaping the automotive landscape. The data-driven insights offered by this technology hold the key to reducing accident-related casualties and optimizing repair operations. As Vancouver’s auto body shops join forces with machine vision pioneers, we are witnessing a promising transformation in the way collisions are understood, assessed and repaired. With a commitment to innovation and collaboration, the road ahead is illuminated by the potential of data-driven transformation.

Shutter, difference between interfaces

The type of shutter should be chosen based on the task the camera has to accomplish. The shutter blocks light from reaching the camera’s sensor and opens only for the duration of the exposure. The difference between a global shutter and a rolling shutter lies in how they expose the sensor.

A global shutter opens fully, exposing the entire surface of the sensor at once. It is the best choice when you need to shoot fast-moving subjects.
A rolling shutter exposes the sensor line by line. Depending on the exposure time set, it may distort the image if the subject moves while the shutter is open (the so-called “rolling shutter effect”). As a rule, however, cameras with a rolling shutter are cheaper.

Thus, it makes more sense to use a global shutter for moving objects and a rolling shutter for static ones.
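To get a feel for the size of the rolling-shutter distortion, here is a rough back-of-the-envelope sketch; the readout time per line, object speed and pixel footprint are illustrative assumptions, not figures from this article:

```python
# Rough estimate of rolling-shutter skew for a horizontally moving object.
# All numbers below are illustrative assumptions.

sensor_rows = 1464            # number of sensor lines read out one by one
line_readout_s = 10e-6        # assumed readout time per line, 10 µs
object_speed_m_s = 1.0        # assumed object speed across the frame
pixel_footprint_m = 0.0002    # assumed size of one pixel on the object, 0.2 mm

frame_readout_s = sensor_rows * line_readout_s   # ~14.6 ms to read the frame
skew_m = object_speed_m_s * frame_readout_s      # how far the object moved
skew_px = skew_m / pixel_footprint_m             # the same shift in pixels

print(f"Readout takes {frame_readout_s * 1e3:.1f} ms; "
      f"vertical edges skew by ~{skew_px:.0f} px")   # ~73 px of skew
```

A global shutter avoids this skew entirely, because every line is exposed during the same interval.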

What is the difference between interfaces?
The interface is the link between the camera and the computing unit responsible for image processing (with the help of software). To choose the right interface for a specific machine vision system, you need to find the optimal combination of performance, cost and cable length.

Depending on existing requirements, GigE, USB 3.0, or Camera Link can be selected, whichever offers the best set of features for fast, reliable and secure transfer of images from the camera to the computer.

FireWire and USB 2.0 are aging technologies which, due to their limitations, are no longer recommended for today’s imaging systems.

GigE is currently the most common interface. It allows data to be transferred over long distances (up to 100 m) at high speeds, without loss.
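As a rough illustration of that trade-off, the sketch below filters interfaces by required data rate and cable length. The bandwidth and cable-length figures are approximate, commonly cited values, not specifications from this article; always check vendor datasheets:

```python
# Approximate, commonly cited figures; confirm against vendor datasheets.
interfaces = {
    "GigE":        {"gbit_s": 1.0, "max_cable_m": 100},
    "USB 3.0":     {"gbit_s": 5.0, "max_cable_m": 5},
    "Camera Link": {"gbit_s": 5.4, "max_cable_m": 10},  # Full configuration
}

def suitable(required_gbit_s: float, cable_m: float) -> list:
    """Return interfaces that meet both bandwidth and cable-length needs."""
    return [name for name, spec in interfaces.items()
            if spec["gbit_s"] >= required_gbit_s
            and spec["max_cable_m"] >= cable_m]

print(suitable(0.8, 60))   # ['GigE'] – long run, moderate bandwidth
print(suitable(3.0, 3))    # ['USB 3.0', 'Camera Link'] – short run, high rate
```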

Machine vision cameras also differ in appearance, size, connector types and lens mount. When using a camera outdoors or in dusty industrial environments, pay attention to the protection rating. An IP67 rating, for example, means the device is completely dustproof and protected against brief immersion in water: it keeps out dangerous quantities of water when the camera is submerged for a certain time at a certain pressure. An industrial housing makes the camera more expensive but more versatile, which matters if you plan to use it in production, outdoors or in dusty areas without additional enclosures. For embedded systems, the dimensions and weight of the camera are especially important; specially designed housingless (board-level) camera modules exist for such tasks.
The bayonet is the type of lens mount. C-mount, CS-mount and S-mount are widespread. Be sure to check the compatibility of the lens and camera.
In addition, there are special cameras for specialized tasks, such as HDR or polarization cameras.

To recap
As a result of reading this article, you should have formed a rough list of requirements for your future camera:

Type of camera (matrix or linear);
Image type (monochrome, color, multispectral or hyperspectral);
Image resolution;
Frame rate and exposure;
Shutter type (global or rolling);
Camera interface;
Protection class (e.g. IP67) and industrial design.

Based on this information, you can use the filter in the catalog to determine which cameras are suitable for your application.
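As a summary, that checklist can also be captured in code. The sketch below is a hypothetical illustration of such a requirements record, not an actual catalog API; all field values are examples:

```python
from dataclasses import dataclass

@dataclass
class CameraRequirements:
    """Rough checklist from this article; the values below are illustrative."""
    camera_type: str        # "matrix" or "linear"
    image_type: str         # "monochrome", "color", "multispectral", "hyperspectral"
    resolution_px: tuple    # (width, height)
    frame_rate_fps: float
    shutter: str            # "global" or "rolling"
    interface: str          # "GigE", "USB 3.0", "Camera Link"
    ip_rating: str          # e.g. "IP67" for dusty or wet environments

req = CameraRequirements(
    camera_type="matrix", image_type="monochrome",
    resolution_px=(1936, 1464), frame_rate_fps=30.0,
    shutter="global", interface="GigE", ip_rating="IP67",
)
print(req)
```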

Machine vision components

Machine vision systems are usually divided into two independent subsystems:

Image capture;
Image processing and analysis.
Each of them, in turn, includes a different set of components depending on the requirements of the particular application. The image processing and analysis subsystem consists of hardware and software components.

Hardware – a computing unit, built on a PC or on specialized equipment, designed for image processing.

Software – special software containing mathematical data-processing algorithms. These may be classical mathematical algorithms or neural networks. The developer’s task is to choose the types of algorithms and their sequence. However, before images can be processed, they must first be acquired.
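As a minimal sketch of such a classical chain (the file name and threshold values are assumptions for the example, not prescriptions), a simple defect-detection pass might look like this:

```python
import cv2

# A minimal classical pipeline: grayscale -> threshold -> contour extraction.
# "part.png" and the threshold value are illustrative assumptions.
image = cv2.imread("part.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarize: pixels darker than the threshold become candidate defects.
_, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)

# Extract connected regions and keep only those large enough to matter.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
defects = [c for c in contours if cv2.contourArea(c) > 25]
print(f"Found {len(defects)} candidate defects")
```

A neural-network pipeline would replace the threshold and contour steps with a trained model, but the acquire-then-process structure stays the same.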

The image capture subsystem consists of one or more cameras, optics, illumination and – most often – a sensor or encoder. Machine vision cameras usually have multiple digital lines for synchronization with sensors, controllers, illuminators, etc.

There are also so-called “smart cameras” that combine all the main components (camera, optics, illumination, computing unit) in one housing.

The image – an array of pixel values, or a point cloud in the case of multidimensional representations – can be obtained with different equipment: a digital camera, a thermal imaging camera, a laser 3D scanner and others. The choice need not be limited to one type of device. The approach to the task, the correct selection of components and the choice of processing platform are defined by the system developer.

A digital camera can be a network (IP), matrix or line camera; color, multispectral, hyperspectral or monochrome; with different resolutions and pixel sizes. Sometimes you have to sacrifice resolution in favor of pixel size, and sometimes a small pixel may be preferable. The optical subsystem and lighting are selected depending on the type of camera and the subject under study. It is equally pointless to pair a good, expensive camera with a mediocre, cheap lens, and vice versa. You can learn more about the main aspects in the article “Choosing a Camera”.

If the available resolution is not enough to read, say, the markings on the surface of a box, then one of the two components (camera or optics) has been selected incorrectly.

Lenses come in different focal lengths: wide-angle, macro, variable focal length (zoom), telecentric, endoscopic (for “peeking” into tubes) and 360° coverage. When choosing a lens, also pay attention to the mount: C, CS, S and others.

Illumination can be constant or pulsed. Pulsed illumination is often used to photograph fast-moving objects, making it possible to work at short shutter speeds and get a sharp image regardless of the object’s speed. The illumination can be linear, circular, backlight, structured, or a laser. The wavelength can be red, green or blue, or lie in the infrared or ultraviolet range. Various combinations of all of the above are possible. Note, however, that selecting the lighting and its position relative to the object is often a harder task than choosing the camera. We cover the subtleties of light selection in the article “Can’t do without light”.

A wrong technical solution for image acquisition is very difficult, and in some cases impossible, to compensate for with even the most sophisticated mathematical algorithms.

When selecting all the components of a machine vision system on your own, pay attention to two important points:

All system components must be carefully matched, because the weakest link in a complex technological and functional chain limits overall performance. Using the wrong component (such as a “cheap” lens on a high-quality camera) can compromise the functionality of the entire system.

The right camera, optics and illumination are only a small part. Computing power and specialized software packages are essential aspects of a machine vision system, without which it cannot function properly. The computing capability must provide the required system performance: a mistake in the choice of computing power will affect the speed of image processing. Moreover, the higher the resolution of the camera, the more processing power the system needs. So there is no need to chase megapixels; most often even a 2-megapixel camera can handle the task.
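A quick estimate of the uncompressed data rate shows why resolution drives computing requirements; the sensor size and frame rate below are illustrative:

```python
# Uncompressed data rate of one camera stream (illustrative numbers).
width, height = 1936, 1464        # a 2.8 MP sensor
bytes_per_px = 1                  # 8-bit monochrome
fps = 30

rate_mb_s = width * height * bytes_per_px * fps / 1e6
print(f"{rate_mb_s:.0f} MB/s before any processing")   # ~85 MB/s
```

Doubling the resolution roughly quadruples this stream, and every downstream algorithm has to keep up with it.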

For help selecting machine vision system components, we recommend contacting our specialists. To assist you with the selection of equipment, please fill out the questionnaire or click the “Select Components” button.

Why machine vision systems are needed

The term “machine vision” is usually used to describe systems and technologies for industrial automation, i.e. where “machines” in their broadest sense are used: machinery in the form of mechanisms or devices that perform some work.

The term “computer vision” (CV) implies that a computer is the main element of such systems. Computer vision systems find application not only in industry but also in medicine (erythrocyte counting, iridodiagnostics, etc.), in security and safety tasks (license plate recognition, face recognition) and elsewhere. The main emphasis in computer vision is on the algorithmic side, the mathematics, rather than on the areas of its practical application.

In everyday usage, the word “machine” is associated with the word “automobile” rather than with the more general notion of machinery (a washing machine, for example, is thought of as a household appliance). This is probably why there is another interpretation free of this ambiguity: technical vision. In our opinion, it is fully analogous to the English term “machine vision” and can be used on a par with the latter to describe industrial systems that use vision in all its manifestations. Below we talk about machine vision, or technical vision, systems.

Machine vision systems are becoming more accessible by the day, which explains their widespread introduction in various areas of production. If you too are considering a machine vision system, answer the simple question: “What do you need it for?”

There are several options:

Increasing production efficiency;
Managing product quality;
Reducing the need for human resources;
Increasing the accuracy of measurements;
Increasing product output.

The main directions of machine vision use:

Finding defects;
Control of shape and geometry;
Product sorting;
Robotics;
Logistics;
Stacking;
Label reading;
Output inspection;
Shooting in hazardous or inaccessible areas.

During the operation of a machine vision system, an image is captured, and it is the image that serves as the source of invaluable information. This information is further processed, analyzed, evaluated, and used to make process control decisions.

Let’s elaborate on the advantages of introducing machine vision systems:

Increased production efficiency is achieved by:

detecting defects earlier in the production process;

increasing the system’s flexibility in controlling machines and mechanisms;

enabling quick changeover of equipment;

accounting for raw materials and supplies that are difficult for humans to evaluate.

Visual inspection systems are widely used for quality control of products at all stages of production. By eliminating the human factor, they allow a much more accurate, unbiased and formalized analysis of the object, increasing both the number of inspection points and the speed of inspection.
Thus, machine vision systems make it possible to produce products in the desired volume with the specified characteristics.
An uncomplicated system can be developed independently. This information block will help you understand its components, select them correctly and learn everything else you need. Read more about machine vision cameras or about choosing a lens in the corresponding articles.

Choosing a computer vision camera

Typically, the selection of computer vision system components starts with a camera that meets all the requirements of the task. Analog cameras are not considered in this article because of their lower image quality.

LUCID’s range of machine vision cameras
There is a quite common misconception that simple and cheap IP or web cameras can solve industrial machine vision tasks.
In fact, machine vision cameras in any production must meet a number of requirements:
Reliability, build quality, high fault tolerance;
High image quality;
High sensor sensitivity;
Digital I/O lines for connecting external devices;
Industrial interfaces;
An SDK – a development tool for various programming languages;
Industrial design: water and dust resistance, vibration resistance, a wide operating temperature range.

Network, WEB or machine vision camera?

Cameras for image processing are divided into machine vision, network (IP) and web cameras. The main differences are described below.

When it comes to optical inspection tasks, image processing with specialized software, and obtaining reliable image data in general, machine vision cameras are required. They are what deliver the high-quality, uncompressed images needed for computer vision (CV) algorithms to work correctly. It is the absence of compression that guarantees complete information about the object and distinguishes machine vision cameras from IP and web cameras.

But you do not always need all this extensive (and resource-intensive) information. If high accuracy is not required and you simply need video or images for human observation, consider the more budget-friendly network (IP) cameras. Web cameras are usually used for educational purposes, to study computer vision at home or in the lab.

Machine vision cameras, on the other hand, are usually used for various tasks in industry, medicine and life sciences, traffic, as well as high-quality video surveillance. A distinction is made between matrix (2D), line (1D), and three-dimensional (3D) cameras.

Matrix cameras

These are equipped with a rectangular sensor, a two-dimensional array of pixels. The image is acquired in a single operation.
Linear cameras

Unlike matrix cameras, their sensor contains just one or two, or more rarely several, rows of pixels. Image data is captured line by line, after which the complete image is reconstructed from the individual lines during processing.
They are used everywhere for quality control of products in motion, sometimes at very high speed (the line scan rate can reach 80 kHz). Line cameras are often used to inspect coiled material (rolled metal, paper, veneer) or products moving on a conveyor belt.
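Conceptually, the reconstruction step simply stacks the captured lines into a 2D image. In the sketch below, acquire_line() is a hypothetical stand-in for the camera driver call, returning random data for illustration:

```python
import numpy as np

def acquire_line(width: int = 2048) -> np.ndarray:
    """Hypothetical stand-in for a line-scan camera driver call."""
    return np.random.randint(0, 256, size=width, dtype=np.uint8)

# Capture N lines while the material moves past the camera,
# then stack them into a complete 2D image.
lines = [acquire_line() for _ in range(1000)]
image = np.vstack(lines)          # shape: (1000, 2048)
print(image.shape)
```

The second image dimension is thus created by the motion of the material itself, which is why line cameras pair naturally with conveyors and coiled stock.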
Specialized 3D cameras

These cameras form a three-dimensional point cloud (a 3D model of the object) and are used when it is necessary to “see” the shape of the object. They are widely used in intelligent transportation systems (ITS), as well as for robot guidance.
A point cloud of PVC pipes

Smart cameras

These all-in-one devices have an on-board computing unit for on-the-fly image processing. They are generally more expensive, less flexible and lower-performing than a conventional machine vision camera paired with a separate computing unit, but they are good for simple tasks where specialized software is not required. Algorithms are usually assembled with visual constructors from ready-made templates.
Thus, the answer to the question of which type of camera to choose depends on the camera’s application and the requirements placed on it.

Two types of sensors can be used in machine vision cameras: CCD and CMOS. They differ both in their design and in the way they read out the signal. To select the optimal sensor, first determine the intended range of applications. For example, a standard CCD sensor will not capture the thermal infrared needed to make heat maps of objects; that requires a dedicated thermal imaging camera.

The advantages of CCD over CMOS are high light sensitivity, better color reproduction, low noise and high dynamic range. The disadvantages are a complex signal readout scheme, high power consumption and expensive production.

Among the advantages of CMOS sensors are high speed, low power consumption, and cheaper, simpler production. The disadvantages are lower light sensitivity, a lower pixel fill factor, reduced dynamic range and higher noise levels.

The most common CMOS sensors are Sony’s STARVIS and PREGIUS families (generations 1–4).

Color, resolution and frequency

An image is an array of pixels, each with its own attributes. A monochrome image has one attribute: intensity. A color image has three attributes: red, green and blue. A multispectral image has no more than about 10 attributes (spectral bands), and a hyperspectral image has more than 10.

Multispectral or hyperspectral imaging can be done in two ways:

With a specialized camera;
With a monochrome camera and multispectral illumination.
During imaging, several images of the same area of the object are formed simultaneously, but in different regions of the electromagnetic spectrum. Various combinations of these images reveal processes and phenomena that are difficult or impossible to detect in an image taken with a color camera, because the spectrum is not limited to the visible range: the infrared (IR) or ultraviolet (UV) range, for example, can be used as well.

R-spectrum, RG-spectrum and RGB-spectrum examples. A color image taken with a monochrome camera and multispectral (RGB) illumination is better suited to software processing than a color image taken with a color camera.
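The sketch below shows the idea: three monochrome exposures under red, green and blue illumination are stacked into one color image. capture_with_light() is a hypothetical placeholder for camera and illuminator control, returning random data for illustration:

```python
import numpy as np

def capture_with_light(color: str, shape=(1464, 1936)) -> np.ndarray:
    """Hypothetical: fire the given illuminator and grab a monochrome frame."""
    return np.random.randint(0, 256, size=shape, dtype=np.uint8)

# One monochrome exposure per illumination channel...
r = capture_with_light("red")
g = capture_with_light("green")
b = capture_with_light("blue")

# ...stacked into an RGB image with full spatial resolution per channel
# (no Bayer-filter interpolation, which is why this suits software better).
rgb = np.dstack([r, g, b])        # shape: (1464, 1936, 3)
print(rgb.shape)
```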

A normal camera most often works in the visible and near-infrared range, from roughly 350 to 1000 nm. Check the technical documentation for the exact spectral bandwidth of your camera.

LUCID Triton 2.8 MP camera bandwidth from 320 to 1100 nm (monochrome and color)

In the end, you need to understand what kind of image the task requires: should it be color, to estimate the required characteristic; spectral, to reveal complex parameters; or is monochrome enough?

If color is not a prerequisite, we recommend opting for a monochrome camera, as they have higher sensitivity and spatial resolution.

What should the image resolution, frame rate and color depth be?
Is it always necessary to have a high resolution?

Keep in mind that the higher the resolution, the larger the data volume and the slower the system. To determine what resolution is needed for your particular task, a simple calculation is enough. The required resolution along each axis is

R = (S / a) × k

where S is the size of the inspected object along that axis, a is the minimum significant feature size, and k determines how many pixels must fall on the minimum significant feature so that its borders can be distinguished programmatically. For a monochrome camera, k is usually 3 to 5; for an ordinary color camera, this coefficient is doubled.

Here is an example: you need to detect a defect on a part measuring 400 × 300 mm, and the minimum defect size is 0.65 × 0.65 mm. The required resolution along the two axes is then 400 / 0.65 × 3 ≈ 1847 pixels and 300 / 0.65 × 3 ≈ 1385 pixels.

Now, using the obtained values, choose a camera with the same or slightly higher resolution. The LUCID Triton 2.8 MP model (1936 × 1464 pixels) is suitable for this task.
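The same calculation in code, with k = 3 for a monochrome camera as above:

```python
import math

def required_resolution(object_size_mm: float,
                        min_feature_mm: float,
                        k: int = 3) -> int:
    """Pixels needed along one axis: (S / a) * k, rounded up."""
    return math.ceil(object_size_mm / min_feature_mm * k)

rx = required_resolution(400, 0.65)   # 1847 px along the 400 mm axis
ry = required_resolution(300, 0.65)   # 1385 px along the 300 mm axis
print(rx, ry)                         # a 1936 x 1464 camera covers both
```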

In some cases, for example when measuring the geometry of objects, sub-pixel resolution can be used instead of buying an expensive camera with higher resolution. In addition, if your algorithms are based on neural networks, look carefully at their image requirements: they often need no more than 2–3 MP.

Image size is also important when selecting computing units and communications (cables, network adapters, etc.) in multi-camera systems.

Frame rate and exposure

Frame rate (for line cameras, the term “line scan rate” is used) and exposure should not be confused. Frame rate is the number of frames a sensor can capture and transmit in one second. The higher the frame rate, the greater the amount of data transmitted, and the more processing power is required to process it.

Exposure – the interval of time during which the camera shutter is open and light hits the photosensitive area of the camera sensor.

When monitoring objects moving along a conveyor belt at very high speed, for example, the exposure must be a fraction of a millisecond to ensure that the object in the image is not blurred. However, the exposure must be not only short enough but also long enough to produce an image at all: an exposure that is too long causes “over-exposure”, and one that is too short causes “under-exposure”. The minimum exposure a typical machine vision camera can provide is around 40 µs; if an even shorter effective exposure is required, pulsed illumination must be used.
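To pick an exposure for a moving object, you can bound the motion blur to a pixel budget. The conveyor speed and pixel footprint below are illustrative assumptions, not figures from this article:

```python
# Longest exposure that keeps motion blur within a given pixel budget.
# All numbers are illustrative assumptions.
object_speed_m_s = 2.0          # conveyor speed
pixel_footprint_m = 0.0002      # one pixel covers 0.2 mm on the object
max_blur_px = 1.0               # allow at most one pixel of blur

max_exposure_s = max_blur_px * pixel_footprint_m / object_speed_m_s
print(f"Max exposure: {max_exposure_s * 1e6:.0f} µs")   # 100 µs
```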

Color depth (bit depth) is a term that refers to the degree of detail in tonal reproduction. The human eye can perceive about 17 million color shades, or 256 levels of gray.

1 bit: 2¹ = 2 colors. A binary image is most often represented in black and white. Shades are conveyed by the density of black dots: the more densely the black dots are placed, the darker the tone appears.

8 bits: 2⁸ = 256 grayscale levels in monochrome, or 2⁸ · 2⁸ · 2⁸ (R, G, B) = 16.78 million color shades.

Examples: an 8-bit grayscale image and an 8-bit color image
In addition, there are 10-bit, 12-bit and even 16-bit images. To the human eye they look no different from 8-bit images, but machine processing can work with images of higher bit depth, enabling analysis and mathematical calculations of the required accuracy.
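A quick sketch of why the extra bits matter to software even when the eye cannot see them, using synthetic data:

```python
import numpy as np

# Synthetic 12-bit image: 4096 distinct intensity levels.
img12 = np.random.randint(0, 4096, size=(4, 4), dtype=np.uint16)

# Converting to 8 bits for display collapses every 16 adjacent levels into one.
img8 = (img12 >> 4).astype(np.uint8)

levels_lost = 4096 // 256
print(f"Each 8-bit level now hides {levels_lost} distinct 12-bit values")
```

Algorithms that measure subtle intensity gradients therefore benefit from working on the raw high-bit-depth data before any display conversion.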
