Wednesday, January 25, 2012

Vision and Multisensor Inspection Goes Mainstream


Software advances make vision and multisensor technology an everyday tool for inspecting and analyzing mechanical components.
Miniaturization and advanced materials are cutting costs and improving the utility of all sorts of mechanical and electromechanical products, including handheld digital devices, medical implants, miniature plastic-gear drives, diesel-fuel injectors, and compressor blades. As a result, manufacturing engineers are looking for ways to measure and analyze such components quickly and accurately during product development and production.
The exclusive use of contact-inspection systems is no longer an option for many kinds of parts. Conventional CMM probes, for example, even with 1-mm tips, cannot access small blind holes or tiny features. In other cases, complex geometries prevent probes from reaching critical points. Soft, pliable, and dual-durometer materials that deform easily, as well as mirror finishes that contact may damage, are also poor candidates for tactile inspection.
In the past, when these sorts of components were a rarity, measuring microscopes were a suitable choice. However, the growing number of parts with small and inaccessible features, along with requirements in some industries for 100% inspection, has turned microscope inspection into a part-validation bottleneck.
Fortunately, current vision and multisensor systems, which may include devices such as microprobes, laser scanners, and chromatic white-light sensors, let users rapidly collect vast amounts of dimensional information for design analysis and subsequent part validation. The systems use CAD-based programming and inspection software to operate in 2D, 2.5D, and 3D modes, collecting data that is useful not only for validating dimensions but also for analyzing designs and manufacturing processes.
During the past five years, inspection-equipment developers have invested heavily in software with proprietary algorithms that accurately capture images and transform them into discrete data points for automatic comparison against nominals in CAD models. These efforts have pushed vision and multisensor equipment out of the dedicated inspection laboratory and onto the shop floor. The advanced systems are now as easy to use as a typical CMM.
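To make that comparison step concrete, the short sketch below is a hypothetical simplification rather than any vendor's actual code: it checks measured surface points against nominal points taken from a CAD model and flags deviations that exceed a tolerance band. The function name, point format, and tolerance value are all illustrative assumptions.

```python
import numpy as np

def compare_to_nominals(measured, nominal, tolerance=0.01):
    """Compare measured points to CAD nominals (hypothetical simplification).

    measured, nominal: (N, 3) arrays of corresponding XYZ points in mm.
    tolerance: allowable point-to-point deviation in mm.
    Returns per-point deviations and a pass/fail flag for each point.
    """
    deviations = np.linalg.norm(measured - nominal, axis=1)
    passed = deviations <= tolerance
    return deviations, passed

# Example: three measured points versus their CAD nominals (mm).
nominal = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [5.0, 5.0, 0.0]])
measured = nominal + np.array([[0.002, 0.0, 0.001],
                               [0.0, 0.015, 0.0],
                               [0.003, -0.002, 0.0]])
dev, ok = compare_to_nominals(measured, nominal, tolerance=0.01)
for i, (d, p) in enumerate(zip(dev, ok)):
    print(f"point {i}: deviation {d:.4f} mm -> {'PASS' if p else 'FAIL'}")
```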
Algorithms augment optics
A big barrier to wider use of vision and multisensor devices in advanced metrology has been the perception that adjusting systems for appropriate lighting, contrast, and edge-detection sensitivity takes specialized knowledge beyond that of average users. While this may once have been true, it is no longer the case. Powerful new software algorithms effectively automate these adjustments to provide consistent inspections from part to part and from one vision machine to another.
A legitimate concern has been the subjectivity of manual contrast adjustments. Optimized contrast substantially improves inspection accuracy by sharpening the vision system's ability to detect edges and by compensating for the tendency of light to bend around the edges of cylindrical surfaces, which shortens measured distances. Today, special algorithms automate contrast adjustment: at the touch of a button, the algorithm makes a series of rapid iterative adjustments until it reaches the best contrast.
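One way such an auto-contrast routine could work is sketched below purely as an illustration; the scoring metric, the lighting-control callback, and the search loop are assumptions, not a description of any particular product. The idea is to step the lighting level, score each resulting frame by the strength of its intensity gradients, and keep the setting that scores highest.

```python
import numpy as np

def edge_contrast_score(image):
    """Score an image by the mean magnitude of its intensity gradients.
    Sharper, higher-contrast edges yield a larger score."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def auto_adjust_contrast(capture_image, levels=np.linspace(0.1, 1.0, 10)):
    """Iteratively try lighting levels and keep the one with the best
    edge-contrast score. `capture_image(level)` is a user-supplied
    callback that sets the light level and returns a grayscale frame."""
    best_level, best_score = None, -np.inf
    for level in levels:
        score = edge_contrast_score(capture_image(level))
        if score > best_score:
            best_level, best_score = level, score
    return best_level, best_score

# Demo with a simulated camera: a synthetic edge whose contrast peaks
# at a mid-range light level and washes out when over-lit.
def fake_camera(level):
    x = np.linspace(0, 1, 200)
    contrast = 1.0 - abs(level - 0.6)        # best contrast near level 0.6
    frame = contrast * (x > 0.5).astype(float)
    return np.tile(frame, (50, 1))

level, score = auto_adjust_contrast(fake_camera)
print(f"best lighting level: {level:.2f} (score {score:.4f})")
```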
Differences in the light sources used to illuminate parts (for example, halogen or LED) and in ambient lighting from one location to another have been another source of vision-measurement variability. It is now straightforward to correct for these variations: current inspection software lets users compensate for such effects much as they would calibrate a probe on a CMM.
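One generic way to implement such a lighting compensation, shown below strictly as an assumption about how it might be done rather than a claim about any specific package, is a flat-field correction: capture a reference image of a uniform white target under the current lighting and use it to normalize each measurement image.

```python
import numpy as np

def flat_field_correct(image, white_ref, dark_ref=None, eps=1e-6):
    """Generic flat-field correction (illustrative only).

    image:     grayscale frame of the part under the current lighting.
    white_ref: frame of a uniform white target under the same lighting.
    dark_ref:  optional frame with the light off (sensor offset).
    Returns an image normalized so that lighting non-uniformity and
    source differences (halogen vs. LED, ambient light) are reduced.
    """
    image = image.astype(float)
    white = white_ref.astype(float)
    if dark_ref is not None:
        dark = dark_ref.astype(float)
        image = image - dark
        white = white - dark
    gain = np.mean(white) / np.maximum(white, eps)
    return image * gain

# Example: a frame with a bright-left / dim-right falloff is flattened
# using a white reference captured under the same falloff.
falloff = np.linspace(1.2, 0.8, 100)
white_ref = np.tile(falloff, (80, 1)) * 200.0
part = np.tile(falloff, (80, 1)) * 120.0
corrected = flat_field_correct(part, white_ref)
print("edge-column means before:", part.mean(axis=0)[[0, -1]].round(1))
print("edge-column means after: ", corrected.mean(axis=0)[[0, -1]].round(1))
```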
Additionally, because camera probes do not touch the edge they are measuring, edge detection must rely on accurate interpretation of the data the vision software receives from the camera. Advanced vision-inspection software can fine-tune algorithms to account for both the part surface and the illumination, letting the software accurately find each feature edge.
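To make the idea of interpreting camera data concrete, the snippet below is a textbook-style sketch, not vendor code: it assumes a simple threshold-crossing model and locates an edge to subpixel precision along one scan line of pixel intensities by interpolating where the profile crosses a threshold.

```python
import numpy as np

def subpixel_edge(profile, threshold=None):
    """Locate an edge along a 1-D intensity profile to subpixel precision.

    profile:   pixel intensities along a scan line crossing the edge.
    threshold: intensity level defining the edge; defaults to the midpoint
               between the darkest and brightest pixels.
    Returns the fractional pixel position of the first threshold crossing,
    or None if the profile never crosses it.
    """
    p = np.asarray(profile, dtype=float)
    if threshold is None:
        threshold = 0.5 * (p.min() + p.max())
    for i in range(len(p) - 1):
        a, b = p[i], p[i + 1]
        if (a - threshold) * (b - threshold) < 0:       # crossing between i and i+1
            return i + (threshold - a) / (b - a)        # linear interpolation
    return None

# Example: a dark-to-bright transition whose true edge lies between pixels 4 and 5.
scan = [10, 11, 10, 12, 60, 200, 210, 208]
print(f"edge at pixel {subpixel_edge(scan):.2f}")
```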
Generally, inspection software uses a dominant-edge algorithm to select the edge of a part, especially when using a device with built-in illumination, and this approach works well. But when measuring top-lit parts with a high surface finish, this method is problematic. In these cases, a specific-edge algorithm is preferable; it detects features of interest based on contrast, shape, and location. Another example: grind marks on the part might confuse a camera using top lighting. Here, the software might apply an algorithm that chooses the most dominant edge out of the possible candidates in the camera's field of view.
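The difference between the two selection strategies can be sketched roughly as follows; the candidate model and scoring below are assumptions made only for illustration. A dominant-edge rule keeps the strongest-contrast candidate in the field of view, while a specific-edge rule keeps the candidate closest to where the CAD model says the edge should be, provided it clears a minimum contrast.

```python
def dominant_edge(cands):
    """Pick the candidate with the strongest contrast (works when the
    real edge dominates, e.g. back-lit parts or faint grind marks)."""
    return max(cands, key=lambda c: c[1])

def specific_edge(cands, expected_pos, min_contrast=20.0):
    """Pick the candidate nearest the CAD-expected position that still
    clears a minimum contrast (useful on shiny, top-lit parts where
    glare can out-shine the real edge)."""
    usable = [c for c in cands if c[1] >= min_contrast]
    return min(usable, key=lambda c: abs(c[0] - expected_pos)) if usable else None

# Candidates are (position in pixels, contrast strength) along one scan line.
grind_marked = [(42.3, 35.0), (57.8, 90.0)]      # real edge is strongest
shiny_top_lit = [(31.0, 95.0), (57.8, 60.0)]     # glare out-shines real edge

print("dominant edge on grind-marked part:", dominant_edge(grind_marked))
print("specific edge on shiny part (expect ~58 px):",
      specific_edge(shiny_top_lit, expected_pos=58.0))
```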