Pontificia Universidad Católica del Perú
Escuela de Posgrado

Master's Thesis
"Image acquisition and processing for multi-channel spectral imaging"

To obtain the degree of: Master of Science (M.Sc.) in Mechatronics Engineering
Presented by: Mario Eduardo Zárate Cáceres
Date and place of birth: 08/10/1990, Cusco, Peru
PUCP code: 20144225
Department (TU Ilmenau): Quality Control and Industrial Image Processing
Responsible tutor (TU Ilmenau): Dipl.-Wirtsch.-Ing. Edgar Reetz
Responsible tutor (TU Ilmenau): Dr.-Ing. Martin Correns
Responsible professor (TU Ilmenau): Univ.-Prof. Dr. rer. nat. Gunther Notni
Responsible professor (PUCP): M.Sc. Ericka Madrid Ruiz
Date and place: 25 April, Lima

Declaration of Authorship (Selbstständigkeitserklärung)

I hereby declare that I have written the present master's thesis independently, without the use of sources other than those indicated. All passages taken literally or in essence from published sources have been marked as such. Passages of this thesis whose wording or meaning is taken from other works (including internet sources) have been marked with an indication of the source.

Ilmenau, 25th April, 2016
Mario Eduardo Zárate Cáceres

Acknowledgment

The accomplishment of this master's thesis would not have been possible without the assistance of my advisors, discipline, and hard work. I thank Univ.-Prof. Dr. rer. nat. Gunther Notni for allowing me to work in his research area; likewise, I would like to extend my gratitude foremost to Dipl.-Wirtsch.-Ing. Edgar Reetz and Dr.-Ing. Martin Correns for mentoring me over the course of my master studies. I sincerely thank them for their confidence in me. Additionally, I would like to extend my gratitude to Prof. Dr.-Ing. Tom Ströhla and Prof. Benjamin Barriga, the professors responsible for the double-degree program, who made this cooperation between Technische Universität Ilmenau and Pontificia Universidad Católica del Perú possible. I would also like to extend my gratefulness to Prof. M.Sc. Ericka Madrid for her support from Peru.

This work would not have been possible without the support of CONCYTEC and my country, which believed in me to accomplish this goal. I am genuinely thankful to Peru's government for encouraging young people and helping them grow professionally. This scholarship gives students the possibility to improve their knowledge and fosters a shared cooperation between Peruvian and German students.

I would like to thank my parents, Mario and Rocío, in a very special way; without them none of this would be possible. Thank you for your unconditional support despite the time and distance. I also thank my siblings Fabiola, Ernesto and Ricardo, who still have a long road ahead of them; I adore you. To all my family and my friends in Peru, thank you for your company and friendship during all this time. In the same way I want to thank the Virgin Mary Help of Christians, for she has done it all ...

Finally, I would like to include my Peruvian friends in Germany in my thanks. I would also like to thank my German friends for helping me through this process.

Abstract

Nowadays, spectrometers are useful in many applications, such as biomedical technology and other industrial fields. State-of-the-art low-cost spectrometers are usually equipped with linear photoelectric array detectors (line detectors).
This thesis covers part of the software development for a low-cost multi-channel spectrometer using a matrix detector instead of a linear array detector. The image acquisition, including an automatic integration-time optimizer, was implemented. Furthermore, algorithms for data extraction and calibration were developed. Finally, the multi-channel system with the new software was compared with a high-resolution spectrometer and the results were discussed.

Zusammenfassung

Nowadays, spectrometers are useful in many applications, such as biomedical and other industrial fields. State-of-the-art low-cost spectrometers are equipped with linear photoelectric array detectors (line detectors). This master's thesis covers part of the software development for a low-cost multi-channel spectrometer which works with a matrix detector instead of a line detector. The image acquisition, which includes an automatic integration-time optimizer, was implemented. Furthermore, algorithms for data extraction and calibration were developed. Finally, the multi-channel system with the new software is compared with a high-resolution spectrometer and the results are discussed.

Contents

List of Tables VI
List of Figures VII
Abbreviations and Symbols X
1 Introduction 1
1.1 Motivation 1
1.2 Problem 2
1.3 Objective of this thesis work 3
2 State of the Art 4
3 Theoretical Considerations 8
3.1 Imaging 8
3.2 Spectroscopy 9
3.2.1 Visible light range 9
3.2.2 Diffraction gratings 10
3.2.3 Czerny-Turner 11
3.2.4 Concave Holographic 12
3.3 Optical fiber 13
3.4 Exposure time 14
3.5 Hot Pixel 14
3.6 Optical Filters 15
3.6.1 Dichroic filter 15
3.6.2 Full width at half maximum 16
4 Implementation 17
4.1 Image acquisition 18
4.2 Finding orientation 26
4.3 Image decomposition 31
4.4 Calibrating the wavelength 39
4.5 Cropping channels 47
4.6 Computing Spectra 49
4.6.1 Arithmetic Mean 51
4.6.2 Weighted Mean 51
4.6.3 Quadratic Mean 52
4.6.4 Weighted Gaussian mean 52
4.7 Resizing spectrum per channel 55
5 Experimental results 58
5.1 Calibration process 58
5.2 Displaying results 59
5.3 Standard spectrometer as reference 60
6 Discussion 64
6.1 Validation of results 64
6.2 Considerations and limitations 69
7 Summary and Outlook 72
Bibliography 73
Appendix A Datasheet sensor EV76C560 82
Appendix B Optical Filters Carl Zeiss 83
Appendix C LED tests 84
Appendix D Laser diodes 87
Appendix E Dyomics Dye Markers 88
Appendix F Code: Find rotation 91
Appendix G Code: Create floating points 92
Appendix H Code: Check Test Line 93
Appendix I Code: Crop channels 94
Appendix J Code: Wavelength calibration 95

List of Tables

4.1 "Channels.data", first calibration file with channel key points 39
4.2 "ValProChannels.data", calibration file with wavelength key points 45
4.3 Position test of wavelength in channel 1 (λ = F(p)) 46
4.4 Wavelength range in channels 47
4.5 "scale.data", calibration file which contains the factor f to rescale channels 57
5.1 Characteristics of both spectrometers 62

List of Figures

2.1 Typical spectral imaging approaches 5
2.2 Multi-point Spectrometer SPECIM 7
3.1 Visible Spectrum 9
3.2 Diffraction Grating 10
3.3 Crossed Czerny-Turner Spectrograph 11
3.4 Unfolded Czerny-Turner Spectrograph 12
3.5 Concave-Holographic Spectrograph 13
3.6 Full width at half maximum 16
4.1 Structure of spectrometer 17
4.2 System structure 18
4.3 xiCOP - Ximea Control panel 19
4.4 Image sensor Ximea, the photoelectric matrix sensor 19
4.5 Comparison between white light and incandescent light bulb 20
4.6 System response to Exposure Time 21
4.7 Photoelectric matrix sensor feedback system 21
4.8 Exposure time in time domain 23
4.9 Response versus time domain of Auto Exposure time 23
4.10 Flowchart of Auto-Exposure Time 25
4.11 Newly acquired image from sensor using LED white light 26
4.12 Histogram of white light source and background with tET = 620 ms 27
4.13 Finding orientation process 30
4.14 Reference image and background 31
4.15 Data from random vertical line 32
4.16 Test line and linear interpolation 32
4.17 Interpolated points along one random line in image 33
4.18 Intensity values along test line, zoom from figure 4.17 34
4.19 dz = f′(x′), peaks in the inspection line 35
4.20 Image and background 35
4.21 Top and bottom detection 36
4.22 Found points along random test line 37
4.23 Linear regression in channel 38
4.24 Laser diode for calibration 40
4.25 Red and Green Laser Diode 41
4.26 Green Laser diode 41
4.27 Optical fiber multichannel slit-laboratory 42
4.28 Image processing of each isolated spot 43
4.29 Position detection of points 43
4.30 Wavelength position lines 44
4.31 Distance between intersection points and channels 45
4.32 Wavelength position in Channel 1 46
4.33 Cropping channels from image 48
4.34 Newly created matrix (mat6) from channel 6 49
4.35 Wavelength data in one channel 50
4.36 Data in channel 6, layer at x′ = 600 50
4.37 Weight in each slide per channel 51
4.38 Gaussian mean 52
4.39 Channel 6 and its values, x′ = 600 53
4.40 White light LED spectrum in channel 6 54
4.41 Channels according to Gaussian Mean 54
4.42 Rescaled channels according to Gaussian Mean 55
4.43 Error between original and rescaled image 56
4.44 Kiviat diagram showing the error per channel and method 57
5.1 Light source for measurements 59
5.2 Spectra of 11 channels using LED white light 60
5.3 HR2000+CG-UV-NIR Spectrometer 61
5.4 Ocean Optics Spectra Suite 62
5.5 CCD array and CMOS matrix 63
6.1 Spectra from white light source compared with a reference 65
6.2 White light spectra from 11 channels 65
6.3 (Red + Green + Blue) LED spectra from 11 channels 66
6.4 Green laser diode (532 nm) spectra from 11 channels 67
6.5 Dyomics Dye Marker 67
6.6 Absorption/emission test using Dyomics Dye Marker 68
6.7 Light distribution using HG-1 Mercury Argon 69
6.8 Acquired image using calibration light source, HG-1 Mercury Argon lamp 70
6.9 Mechanical assembly of image sensor 70
6.10 LED disposition in Ximea sensor 71
6.11 Background spectra 71
Abbreviations and Symbols

ADC Analog-to-Digital Converter
AE Auto Exposure
API Application Programming Interface
CMOS Complementary Metal Oxide Semiconductor
CCD Charge-Coupled Device
tET Exposure Time, ms
e.g. exempli gratia, for example
FPS Frames per Second
FWHM Full Width at Half Maximum
FOV Field of View
GmbH "Gesellschaft mit beschränkter Haftung", "company with limited liability" in English
i.e. id est, that is
JPEG Joint Photographic Experts Group, method of lossy compression for digital images
LED Light-Emitting Diode
µs Microsecond, 1×10⁻⁶ s
nm Nanometer, 1×10⁻⁹ m
NIR Near-Infrared
PNG Portable Network Graphics
ROI Region of Interest/Inspection
RMS Root Mean Square
USB Universal Serial Bus
UV Ultraviolet light, < 380 nm
VIS Visible Spectrum, 390 nm − 700 nm
λ Wavelength, nm

Chapter 1 Introduction

1.1 Motivation

The use of spectral imaging in biomedical applications and industrial fields has increased due to developments in optical spectroscopy. One of its advantages is that it is a non-destructive method, so the measurement can be repeated without altering the sample. It is therefore assumed that a widespread field of applications for spectrometers exists. Nevertheless, the market limits the range of applications, since available devices cannot keep up with the cost requirements targeted for consumer products. Furthermore, a typical spectrometer has several disadvantages that prohibit a successful application as a handheld device, in field applications, and especially in low-cost scenarios. Single-item production also limits how low the production cost can go.

The aim is to use a photoelectric matrix detector, instead of the conventional linear photoelectric sensor, and take advantage of image sensors in order to build a multi-channel spectrometer which is more robust for field applications and at the same time highly cost-efficient. Upgrading such a system with high-resolution matrix detectors provides new possibilities for applications. Moreover, a matrix detector provides redundant data for computing spectra, which can be used to validate and compensate corrupted data. Additionally, it offers the possibility of displaying many spectra at the same time. These features will address new areas of application, and it is expected that the device will become a consumer product.

As the aim is to extend the range of applications, the spectrometer will not be used only as a lab device. Therefore, certain important factors have to be considered, such as mechanical influences or environmental lighting. Spectral imaging has applications in many fields and could be used in:

• Fluorescence measurement, e.g. the emission spectrum of fluorescent material such as a fluorescent lamp or dye markers
• Environmental analysis of water, solids and the like
• Spectral measurement and inspection of LEDs or the like
• Film thickness measurement, where white-light interferometry is used to determine the spectrum peak count, film refractive index and film thickness from the light incidence angle
• Pharmaceuticals, medicine, agriculture and biological engineering, among others

1.2 Problem

The first hurdle to overcome is to acquire an image that is good enough to extract information from. Problems with external light, with clearance produced during mechanical assembly, and the like have to be resolved; the features of the image sensor are also important. Once the image has been digitized, the orientation of the spectra must be found.
Then the channels can easily be decomposed, depending only on the spacing between the channels that can be detected. Besides being blurred, some channels could also exhibit crosstalk; thus, a suitable type of light source should be proposed to guarantee a proper decomposition. Once the channels are segmented, some procedures are required to compute the correct light distribution, since the accurate position of each wavelength is initially unknown. Finally, the data should be reduced from the matrix information to spectra on 2D axes (wavelength [nm] vs. height [counts]). Additionally, the energy lost in the optical fibers has to be compensated in order to display reliable and useful results.

Time consumption is another challenge: as the run-time would otherwise take several seconds, some steps belonging to the calibration process produce calibration files, which reduces the time consumed in subsequent assessments. Since spectrometer users will work under different environmental conditions, a standardization process is needed to yield the same results in each test.

1.3 Objective of this thesis work

Using a matrix sensor makes it possible to convert a single-channel into a multi-channel spectrometer. The initial task is to acquire an image from the photoelectric matrix sensor. This image should be good enough to yield data from which spectra can be computed. Then, image processing methods have to be applied to detect each channel and extract the data. The following tasks will be accomplished by implementing algorithms in C/C++ with image processing libraries:

• Optimize the image acquisition for a better signal-to-noise ratio
• Automatically find and parameterize image regions containing spectral information (decomposition) in the image plane
• Extract data to compute spectra from the present image data and display them
• Examine methods to calibrate the multi-channel spectrometer and produce reliable results; develop a calibration/correction method which considers the warped and distorted image data caused by mechanical influences

Chapter 2 State of the Art

The combination of imaging and spectroscopy is known as spectral imaging. The former provides the amount of light at every pixel I(x, y), whereas a typical spectrometer provides a single spectrum I(λ). The acquisition of a spectral image requires splitting a light beam into its wavelengths. The spectral methods can be divided as follows. The first is the wavelength-scan, which measures the image at one wavelength at a time using a variable filter, a set of filters, a liquid crystal tunable filter or an acousto-optic tunable filter. Another method is the spatial-scan, which measures the whole spectrum, but only a portion of the image at a time; it uses a dispersive element, either a grating or a prism. A third method is the time-scan, which is based on measuring data that are a superposition of the spectral or spatial information. It therefore requires a transformation of the acquired data to derive a spectral image; one of these methods is Fourier spectroscopy [GYM07].

When a linear array detector is used (e.g. a CCD line sensor), only the spectrum of a single point is acquired. This process has to be repeated along a line if the spectra of a whole line are desired (see figure 2.1a). Using a matrix detector, the spectra of many points, which could lie on a line, can be acquired at once (see figure 2.1b).
In contrast to the methods explained above, multispectral imaging does not create the spectrum of the light source, but acquires different images at certain wavelengths using filters (see figure 2.1c). Hyperspectral imaging acquires images in narrow spectral bands, creating a data cube (see figure 2.1d). One considers I(x, y, λ) either as a collection of many images, each measured at a different wavelength, or as a collection of many spectral values at each pixel. Cameras with this kind of imaging used to be considered handcrafted, fragile, expensive, bulky, not customizable and slow; nowadays most of these drawbacks have been overcome [GYM07].

Figure 2.1: Typical spectral imaging approaches (a) Point-scanning with a linear array detector (b) Line-scanning using a matrix detector (c) Multispectral imaging using a band-sequential method (d) Single-shot; the snapshot mode can acquire a complete spectral data cube in a single integration [LHW+13]

In addition, a spectrometer is a device to measure spectra. Different types of spectrometers exist, depending on the target. A multi-channel spectrometer can be used to get spectra from different objects or light sources. It uses the spatial-scan method, where light is diffracted by a grating. If many spectra are acquired by a matrix sensor, it is possible to upgrade a single-channel to a multi-channel spectrometer. This will be done with software development only; neither a new optical system nor a new hardware setup is proposed.

There is a wide range of spectrometers manufactured, for instance, by Ocean Optics [Opt15c] and Thorlabs [Tho14], among others. These are considered conventional spectrometers. All of them are single-channel; they use photoelectric line detectors and have high accuracy. Their price varies between 5,000 € and 10,000 €; the aim is to achieve the same results with low-cost devices whose price is below 1,000 €.

Lately, there is a tendency to miniaturize spectrometers in order to expand their use to new field applications and in-situ assessment. Current mini-spectrometers are single-channel as well. They have many advantages, such as fast read-out speed, good dynamic range and signal quality. Three major technologies exist for the miniaturization: micro-electro-mechanical systems (MEMS), modern interference filters, and the classic diffraction gratings. If the last ones are optimized, they can achieve competitive results.

In industry many types of mini-spectrometers exist, some of them even called micro-spectrometers, with high sensitivity and a wide wavelength region, but they require an external circuit to work [Ham15b]. Other products are plug&play and have a resolution of 1.2 nm [Las15b]. Another similar spectrometer is shown in [Dev15]. Both use optical fibers to guide the light, but again, these are only single-channel. A lighting manufacturer developed a mini-spectrometer with a spectral range of 340 − 750 nm, high sensitivity and precise calibration, focused on light measurement [Nor15]. Ocean Optics also has its miniaturized spectrometer version [Opt15b]. It covers not only 350 − 800 nm, but also the ultraviolet (UV) and near-infrared (NIR) ranges. All the existing mini-spectrometers are single-channel; even commercial spectrometers are single-channel.
Mightex, an American brand, has joined many individual spectrometers in one device under this same concept to obtain up to six channels, increasing the price but providing up to six light entrances for measuring. Mightex offers the market a compact multi-channel fiber spectrometer that features high spectral resolution and high light throughput. The spectrum of each channel is dispersed by a high-efficiency diffraction grating and then imaged onto a 2D CCD sensor. Light from each channel occupies different rows on the CCD sensor. All channels are exposed simultaneously; then the rows associated with each channel are binned together to produce a spectrum for the channel. The fiber channels are spaced out properly to essentially eliminate crosstalk between adjacent channels. The standard CCD camera features a 1.3 MP Sony ICX205 imager with a 12-bit ADC [Mig15]. Specim spectral imaging, a Finnish brand, found a way to convert a hyperspectral camera into a multi-point spectrometer. The maximum number of fibres per camera is determined by the spatial dimension of the detector array and the diameter of the fibre used. It has a standard configuration from 4 to 40 spectral channels, and with customized optical fibres up to 100 channels [SPE16] (see figure 2.2). Neither Mightex nor Specim provides low-cost devices.

There is little information concerning mini-spectrometers with multi-channel spectra. This new approach could be applied to handheld or field applications [RCN15]. Other approaches have been proposed, for instance using smartphones, taking advantage of their cameras and a nano-imprinted diffraction grating, likewise in pursuit of low-cost and field-portable spectral analysis [HCCJ15]. A mini-spectrometer was used in biomedical applications for rapid screening of skin cancer [DSW+15]. Although these are single-channel spectrometers, a cost reduction was pursued. On the other hand, a multi-channel near-infrared spectrometer for parallel optical tomography, which allows the simultaneous acquisition of up to eight channels for functional depth-resolved tissue examinations, was developed [EPT+14].

Figure 2.2: Multi-point Spectrometer SPECIM (a) Multiple optical fibre inputs turn the hyperspectral camera into a multiple-point spectrometer (b) Multi-point inspection in an industrial process [SPE16]

Some researchers have tried to enhance spectrometers by adding inputs. One approach used an array interferometer through a Fourier spectrometer, which is resistant to mechanical and climatic conditions [MATN06]. Another scientific group proposed a monolithic miniature spectral sensor developed for multi-channel use in 2006 [RMB+06]. In addition, using folded gratings it was possible to get nine spectra in a range of 300 − 700 nm; a processing system based on a commercially available ICCD camera (1024 x 1024) was used [BQGZ11].

The use of a diffraction grating can reduce cost. Due to its flexibility, efficiency and good resolution, it reduces the number of elements, but limits the optical design in correcting imaging errors, like field curvature. This new approach should be robust and low-cost to enter the market successfully. Hence it is desirable to reduce or eliminate the disadvantages of the grating by putting a greater effort into developing software that corrects the deviations which may appear.
A small multi-channel spectrometer using a matrix sensor with eleven channels that fulfills the low-cost requirement has not been developed yet.

Chapter 3 Theoretical Considerations

3.1 Imaging

The process of acquiring spatial and temporal data information from objects is called imaging. Nowadays there are two major sensing circuits, based either on CCD (charge-coupled device) or on CMOS (complementary metal oxide semiconductor) technology. The quality of an image determines the amount of information that can be extracted from it. The sensor can be either a 1D linear array or a 2D matrix array.

Since a matrix sensor is used, image processing becomes possible. Image processing means operating on images using mathematical operations; the output may be either an image or a set of characteristics or parameters related to the image [GW08]. Thus some geometric deformations can be detected and compensated algorithmically. Furthermore, several spectra can be read out simultaneously through image segmentation (the image contains many channels, so it is subdivided into a number of uniformly homogeneous regions, each homogeneous region being a constituent part of the entire image [AR05]), and many pixels can be used to compensate for the lower sensitivity that this matrix sensor has in contrast to a linear detector.

The acquired images are characterized by [GYM07]: (1) the spatial resolution, which determines the closest distinguishable features; it depends mainly on the pixel size and pixel count of the image detector, and also on the signal quality; (2) the lowest detectable signal, which depends on the quantum efficiency of the detector (the higher the better), the noise level of the system (the lower the better), the numerical aperture of the optics (the higher the better), and the quality of the optics; (3) the dynamic range of the acquired data, which determines the number of different intensity levels that can be detected; (4) the field of view (FOV), which determines the maximal area that can be imaged; (5) other parameters, including the exposure time range.

3.2 Spectroscopy

Spectroscopy is the dispersion of electromagnetic radiation using different methods. Several physical effects make it possible, for instance interference, absorption, diffraction and refraction. The instrument used to measure it is called an optical spectrometer (spectrophotometer, spectrograph or spectroscope) [Wik16e]. Isaac Newton carried out experiments using a prism to split light in 1666, this being the first known spectrometer [BM12]. Joseph von Fraunhofer replaced the prism with a diffraction grating as the source of wavelength dispersion in 1821. The electromagnetic spectrum covers different ranges, but only one range is visible to the human eye, known as the visible range; ultraviolet and infrared rays are the adjacent ranges, which are not visible but can be detected by spectrometers.

Nowadays the most widely used spectroscopic technique for studying liquids and gases, due to its simplicity, accuracy and ease of use, is absorbance. It can be used as a qualitative tool to identify substances, or to measure the concentration of a molecule in solution. It has the advantage of being a non-destructive method [Opt16a].

3.2.1 Visible light range

The part of the electromagnetic spectrum of light which can be observed by the human eye is known as visible light or the visible spectrum; it ranges from 390 nm to 700 nm [SES10].
Some animals can see light with frequencies outside the visible range, which is shown in figure 3.1.

Figure 3.1: Visible Spectrum [Kai06]

Spectral imaging combines spectroscopy and imaging, two well-known scientific methodologies, to provide a new advantageous tool. Spectral imaging technology originated within remote sensing fields, such as airborne surveillance or satellite imaging, and has been successfully applied to mining and geology, agriculture, military, environmental and global change research [Goe09].

3.2.2 Diffraction gratings

In optics, the optical element which splits and diffracts light into several beams is known as a diffraction grating. It acts as a dispersive element; the directions of these beams depend on the spacing of the grating and on the wavelength of the light. Because of this, gratings are commonly used in monochromators and spectrometers, as shown in figure 3.2. The principles of diffraction gratings were discovered by James Gregory, about a year after Newton's prism experiments, initially with items such as bird feathers. The first man-made diffraction grating was made around 1785 by the Philadelphia inventor David Rittenhouse, who strung hairs between two finely threaded screws. This was similar to the notable German physicist Joseph von Fraunhofer's wire diffraction grating of 1821 [Wik16c].

Figure 3.2: Diffraction Grating Spectrum [Tub11]

There are two types of diffraction gratings: ruled gratings and holographic gratings. Ruled gratings are created by etching a large number of parallel grooves onto the surface of a substrate, which is then coated with a highly reflective material. Holographic gratings, on the other hand, are created by interfering two UV beams to create a sinusoidal index-of-refraction variation in a piece of optical glass. This process results in a much more uniform spectral response, but a much lower overall efficiency.

While ruled gratings are the simplest and least expensive gratings to manufacture, they exhibit much more stray light. This is due to surface imperfections and other errors in the groove period. Thus, for spectroscopic applications (such as UV spectroscopy) where the detector response is poorer and the optics suffer more loss, holographic gratings are generally selected to improve the stray light performance of the spectrometer. Another advantage of holographic gratings is that they are easily formed on concave surfaces, allowing them to function as both the dispersive element and the focusing optic at the same time [BWT15b].

The slit, the grating and the detector work together with different optical components to form a complete system. This system is typically referred to as the spectrograph, or optical bench. While there are many different possible optical bench configurations, the three most common types are the crossed Czerny-Turner, unfolded Czerny-Turner and concave holographic spectrographs (shown in figures 3.3, 3.4 and 3.5 respectively).

3.2.3 Czerny-Turner

The crossed Czerny-Turner configuration has two concave mirrors and one plane diffraction grating, as illustrated in figure 3.3. The focal length of mirror 1 is selected such that it collimates the light emitted from the entrance slit and directs the collimated beam of light onto the diffraction grating.
Once the light has been diffracted and separated into its chromatic components, mirror 2 is used to focus the dispersed light from the grating onto the detector plane.

Figure 3.3: Crossed Czerny-Turner Spectrograph [BWT15c]

By optimizing the geometry of the configuration, the crossed Czerny-Turner spectrograph may provide a flattened spectral field and good coma correction. However, due to its off-axis geometry, the Czerny-Turner optical bench exhibits a large image aberration, which may broaden the image width of the entrance slit by a few tens of microns. Thus, the Czerny-Turner optical bench is mainly used for low- to medium-resolution spectrometers [BWT15c].

Czerny-Turner optical benches cause a fairly high level of stray light. One simple and cost-effective way to mitigate this issue is to unfold the optical bench, as shown in figure 3.4 below. This allows for the insertion of "beam blocks" into the optical path, greatly reducing the stray light and, as a result, the optical noise in the system. This issue is not as damaging in the visible and NIR regions, where there is an abundance of signal and higher quantum efficiencies, but it can be a problem when dealing with medium- to low-light-level UV applications. This makes the unfolded Czerny-Turner spectrograph ideal for UV applications that require a compact form factor [BWT15c].

Figure 3.4: Unfolded Czerny-Turner Spectrograph [BWT15c]

3.2.4 Concave Holographic

The third most common optical bench is based on an aberration-corrected concave holographic grating (CHG). Here, the concave grating is used both as the dispersive and the focusing element, which in turn means that the number of optical elements is reduced. This increases the throughput and efficiency of the spectrograph and makes it more rugged. The holographic grating technology permits correction of all image aberrations present in spherical, mirror-based Czerny-Turner spectrometers at one wavelength, with good mitigation over a wide wavelength range [BWT15c].

Figure 3.5: Concave-Holographic Spectrograph [BWT15c]

3.3 Optical fiber

An optical fiber is a flexible, transparent fiber made by drawing glass (silica) or plastic to a diameter slightly thicker than that of a human hair [Ass16]. Fibers are normally used to transmit light between the two ends of the fiber and are widely used in fiber-optic communications. They are also used for illumination and can be wrapped in bundles so that they carry images, thus allowing viewing in confined spaces, as in the case of a fiberscope [Oly16]. Optical fiber is also widely used as a medium for telecommunication and computer networking, because it is flexible and can be bundled into cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables; this allows long distances to be spanned with few repeaters.

An optical fiber can be compared with a water pipe: just as the pipe directs water from one location to another by guiding it through twists and turns to the desired location, the optical fiber guides light waves in a similar fashion. It will lead light into a spectrometer or another optical detection system. This is achieved by a process known as total internal reflection. The choice of optical fiber involves two key factors: core diameter and absorption.
1. Core diameter: since all of the light in an optical fiber is collected in the core, the diameter of the core directly correlates with the amount of light that can be transmitted. Based on this principle, it would seem intuitive that a larger core diameter will improve the sensitivity and signal-to-noise ratio of a spectrometer. While this is true to a certain extent, there are other limiting factors that need to be considered when selecting the right fiber optic [BWT15a].

2. Absorption: another important factor to consider is the absorption properties of the fiber optic. If the light is absorbed by the fiber, it will never be detected by the spectrometer. For these reasons, it is extremely important to pay close attention when selecting a fiber for a specific application.

A multi-channel spectrometer needs many optical fibers, as this allows measuring different objects at the same time, so the use of fiber-optic bundles is required. A fiber-optic bundle is defined as any fiber-optic assembly that contains more than one optical fiber in a single cable. The most common example of a fiber-optic bundle is the bifurcated fiber assembly.

3.4 Exposure time

The exposure time is a term used in photography which refers to the time during which a photographic film or electronic image sensor is exposed to light, where exposure is the amount of light per unit area as determined by shutter speed, lens aperture and scene luminance. An image is overexposed when the amount of light the image sensor receives exceeds the maximum level the sensor can detect, leading to a loss of highlight detail. On the other hand, an image can be considered underexposed when it shows a loss of shadow detail: the time during which the image sensor receives light is too short and the image turns out too dark [Wik16a].

3.5 Hot Pixel

Hot pixels, or defective pixels, are individual detectors in the image sensor which do not perform as expected; indeed, they are damaged. They are normally found in CCD or CMOS sensors. Usually these pixels are inconspicuous, but if each pixel is inspected they can be detected, because their value never changes. They can appear as dark pixels, which never pass any light, or as bright pixels, which let all light pass, creating a bright white pixel.

The cause is as follows: the sensor elements which collect photons, called pixels, convert these photons into electric charges, and the values are then read out as analog voltages to be digitized later. Sometimes leakage currents inject additional electric charges into the sensor. These excess charges increase the voltage at the well (pixel) and make it look brighter than it should. Manufacturing variations cause some pixels to have much more leakage current than others; these few pixels on each sensor are called "hot" [Roc16].

3.6 Optical Filters

Optical filters transmit light of different wavelengths, and are usually implemented as plane glass or plastic devices which have interference coatings. The optical properties of filters are completely described by their frequency response, which specifies how the magnitude and phase of each frequency component of an incoming signal is modified by the filter [MZ99]. Filters mostly belong to one of two categories. The simplest, physically, is the absorptive filter; then there are interference or dichroic filters.
Absorptive filters are made from glass containing compounds that absorb some wavelengths of light while transmitting others.

3.6.1 Dichroic filter

Dichroic filters (also called "reflective", "thin film" or "interference" filters) can be made by coating a glass substrate with a series of optical coatings. Dichroic filters usually reflect the unwanted portion of the light and transmit the remainder. They use the principle of interference: their layers form a sequential series of reflective cavities that resonate with the desired wavelengths. Other wavelengths destructively cancel or reflect as the peaks and troughs of the waves overlap. They can be used in devices such as the dichroic prism of a camera to separate a beam of light into differently coloured components. The basic scientific instrument of this type is the Fabry–Pérot interferometer. It uses two mirrors to establish a resonating cavity and passes wavelengths that are a multiple of the cavity's resonance frequency [Wik16d].

3.6.2 Full width at half maximum

Full width at half maximum (FWHM) is an expression of the extent of a function, given by the difference between the two extreme values of the independent variable (x2 − x1) at which the dependent variable is equal to half of its maximum value fmax/2 (see figure 3.6). In other words, it is the width of a spectrum curve measured between those points which are at half the maximum amplitude on the "y-axis". Half width at half maximum (HWHM) is half of the FWHM. FWHM is applied to phenomena such as the duration of pulse waveforms, the spectral width of sources used for optical communications, and the resolution of spectrometers. The convention of "width" meaning "half maximum" is also widely used in signal processing to define bandwidth as the "width of the frequency range where less than half the signal's power is attenuated", i.e., the power is at least half the maximum [Wik16b].

Figure 3.6: Full width at half maximum [Wik16b]

Chapter 4 Implementation

Typically, an assembled spectrometer consists of three parts: the sample mount, the optical system and the electronic transducer. This thesis focuses on taking advantage of a simple version of a spectrometer, whose design is displayed in figure 4.1. The light is transmitted through optical fibers, which represent the entrance optic and the slit as shown in the scheme; then the light is diffracted by the grating, and finally the photoelectric sensor receives the spectra.

Figure 4.1: Principle structure of a diffraction-grating-based spectrometer design - most classic miniaturized spectrometers have this or a similar structure [RCN15, Fig. 1.1]

The main advantage of the proposed design, in comparison to other spectrometers, is the multi-channel entrance with 11 optical fibers. Consequently, it enables acquiring 11 different spectra. As shown in figure 4.2, the 11 fibers transmit light, and an image is then acquired by a CMOS sensor. Immediately afterwards, the data is sent to the computer through USB 3.0 and processed using C++ and the OpenCV library. Additionally, some libraries provided by the manufacturers are used. A simple model has already been assembled for previous research and is still suitable for improving results on this topic.
Figure 4.2: System structure; the system has to be connected to a computer through USB 3.0 and does not need an external energy source

4.1 Image acquisition

The photoelectric matrix sensor acquires the image data from which spectra are to be extracted. The first step is to check all the features of the image sensor, which is manufactured by Ximea. The manufacturer furnishes the "XIMEA API software" package to drive its hardware; it is available on their web page for all operating systems [Xim15f]. Microsoft Windows 7 with the Microsoft Visual Studio 2012 compiler (MSVC2012) will be used. The sensor has USB 3.0 and can be connected directly to the computer, which, however, must also have a USB 3.0 port; otherwise it will not connect. The connection may not work correctly if USB 2.0 is used. When the whole setup is done, the features are displayed on screen through "xiCOP", one of the manufacturer's software tools; the control panel shows exactly the sensor model in use. If the software does not display the image sensor, the connection must be checked or redone. The manufacturer model is MQ013MG-E2-BRD and the sensor model is EV76C560 (see figure 4.4, appendix A). This sensor is of the complementary metal-oxide-semiconductor (CMOS) type and was designed on e2v's proprietary Eye-On-Si CMOS imaging technology. Its design performs well in low-light conditions, and it has a high readout speed of 60 fps at full resolution. Very low power consumption enables this device to be used in battery-powered applications [E2V15].

Figure 4.3: xiCOP - Ximea Control panel

Figure 4.4: The image sensor manufactured by Ximea GmbH [Xim15b]

Ximea offers "xiAPI", an Application Programming Interface (API) which acts as the interface between the camera system and applications for all Ximea cameras [Xim15d]. Since the software is implemented in C++, the "xiApi.h" header has to be included. Among the camera features, the most relevant are that the sensor is monochromatic and has a 10-bit ADC resolution. This means that each pixel has intensity values between 0 and 1023, and advantage must be taken of this feature. Furthermore, as the images have 10 bits, they should be stored as "Portable Network Graphics" with the file extension PNG. This format supports images with bit depths higher than 8 bits, so all the data will still be available when the images are loaded later. If they are saved with the JPEG extension, the images will be reduced to 8 bits, losing valuable information.

The acquisition process is important, since a good-quality image improves the results; this approach will grant reliable images by controlling a suitable exposure time tET [ms]. The exposure time can be set from 16 µs up to 10 s according to the technical sheet [Xim13, p. 22]. Nevertheless, based on experiments it is recommended to set only values between 100 µs and 1000 ms; outside this range the Ximea API crashes, returning XIA(b598):xiSetParam (exposure) Finished with ERROR: 11. With tET = 500 ms, the image taken with LED white light looks good, as shown in figure 4.5a. However, when an incandescent light bulb is used, the image is overexposed: many pixels have reached 1023, and valuable information is being lost (see figure 4.5b).

Figure 4.5: (a) White light source (tET = 500 ms) (b) Incandescent light bulb (tET = 500 ms)
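The following minimal sketch illustrates how such an acquisition could be set up with xiAPI and OpenCV. It is only an outline based on the xiAPI calls named in this chapter (xiOpenDevice, xiSetParamInt, XI_PRM_BPC), not the "DisplayCamera" program attached to this thesis; the data-format constant, the example exposure value and the file name are assumptions that should be checked against the xiAPI documentation.

    #include <xiApi.h>
    #include <opencv2/opencv.hpp>
    #include <cstring>

    int main(){
        HANDLE xiH = NULL;
        xiOpenDevice(0, &xiH);                        // open the first connected camera
        xiSetParamInt(xiH, XI_PRM_BPC, 1);            // enable hot-pixel correction
        xiSetParamInt(xiH, XI_PRM_IMAGE_DATA_FORMAT, XI_MONO16); // 10-bit data in 16-bit words (assumed format)
        xiSetParamInt(xiH, XI_PRM_EXPOSURE, 500000);  // exposure time in microseconds (example value)

        XI_IMG image;
        memset(&image, 0, sizeof(image));             // clear the receiving structure
        image.size = sizeof(XI_IMG);

        xiStartAcquisition(xiH);
        xiGetImage(xiH, 5000, &image);                // grab one frame, 5 s timeout
        // Wrap the raw buffer in a 16-bit single-channel matrix (no copy)
        cv::Mat m(image.height, image.width, CV_16UC1, image.bp);
        cv::imwrite("frame.png", m);                  // PNG keeps bit depths above 8 bits
        xiStopAcquisition(xiH);
        xiCloseDevice(xiH);
        return 0;
    }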
In order to learn how the image sensor behaves, the input value (exposure time) was increased in equal steps. The observed parameter was the pixel with the highest value in the whole image (maxi [counts], the intensity per pixel between 0 and 1023), even though only one pixel is thereby checked. Some sensors have manufacturing defects known as "hot pixels"; these values should be neglected using a command from the Ximea API: the correction of sensor defects can be enabled by setting xiSetParamInt(xiH, XI_PRM_BPC, 1) [Xim15e]. Considering from now on that there are no defective pixels, the observed value is maxi. Using different light sources (see figure 4.6a), the lowest value occurs without any light source; LED white light yields higher values, but does not reach the top. On the other hand, when an incandescent light bulb is used, the highest value is rapidly attained. Nevertheless, it is then not possible to know how many pixels have a value of 1023, and another problem can occur here: overexposure. If an image is overexposed, data will be lost. Thus an Auto Exposure (AE) control should be implemented. The aim is to have at least one pixel with the highest value, which means that the whole range (0 - 1023) is being used, taking advantage of Ximea's 10-bit ADC. If the image is overexposed, the exposure time should be reduced until the image has fewer than 30 pixels at 1023 (npixels=1023 < 30), avoiding data loss. Some light sources cannot reach the set point (0 < npixels=1023 < 30).

Figure 4.6: System response to the exposure time, checking a single pixel, the highest. (a) Relation between exposure time and intensity per pixel (maxi) depending on the light source (b) Effect of different exposure times on the intensity value using white light (LED) versus the time domain

Furthermore, such sources cannot even attain the value of 1023 when the exposure time is at the admissible maximum (tET = 1000 ms). This yields a steady-state error, as shown in figure 4.6b, where the maximum value reached was approximately maxi = 950. Besides, the results are almost identical using 600 ms, 800 ms and 1000 ms, whereas there is a marked difference among the lower values of tET. Thus, the system should be able to balance the brightness by changing tET until acceptable values are reached; hence it is considered as a feedback system, as shown in figure 4.7.

Figure 4.7: Photoelectric matrix sensor feedback system, with controller, system and measurement blocks, set point r(k), error e(k), control input u(k), output y(k), measured value ym(k) and disturbances

• System: the image sensor is a system whose internal parameters are unknown. The input signal u(k) is the exposure time value, which is settable using xiAPI. The output of the system y(k) is an image which contains spectra. The system can receive external disturbances due to external light.

• Measurements: to make the control feasible, the highest value is computed from every image using minMaxIdx from OpenCV, which reads every pixel and saves the maximum and the minimum value within the range (0 - 1023).
If 1023 is attained, the pixels with a value of 1023 can be counted from the histogram obtained with calcHist (npixels=1023 → np).

• Controller: it works with the error in two ranges, first between 0 and 1023, reading the value of each pixel, and in saturation, counting the pixels at 1023. The acceptable number of saturated pixels lies between 0 and 30. The measured value ym(k), the set point r(k), the error e(k) and the control signal u(k) are given in equation 4.1:

$$ y_m(k) = \begin{cases} \mathrm{max}_i(k), & n_p(k) = 0 \\ 1023, & n_p(k) > 0 \end{cases} \tag{4.1a} $$

$$ r(k) = \begin{cases} 1023, & n_p(k) < 30 \\ 30, & n_p(k) > 30 \end{cases} \tag{4.1b} $$

$$ e(k) = \begin{cases} r(k) - y_m(k), & n_p(k) < 30 \\ n_p(k) - r(k), & n_p(k) > 30 \end{cases} \tag{4.1c} $$

$$ u(k) = \begin{cases} k_p \cdot e(k), & n_p(k) < 30 \\ k_{p2} \cdot e(k), & n_p(k) > 30 \end{cases} \tag{4.1d} $$

As the image sensor is an unknown system, it was tested using random kp values, and the responses with LED white light and an incandescent light bulb were compared, as can be seen in figure 4.8; during this test the set point was 700. As illustrated by figure 4.8a, white light gives a slow response when kp = 1; the response is much better with kp = 100 and kp = 200, where the set point is reached in fewer than 10 frames. When incandescent light is used (see figure 4.8b), the response is faster with kp = 1, because this light is more intense. If kp = 100 is used, the system oscillates near the set point, and when kp = 200 is used, the system becomes unstable. Experimentally, the output starts to oscillate when kp is 180 (kp = ku = 180). Following the Ziegler–Nichols method [ZN42] for proportional control, the recommended kp is half of the ultimate gain (0.5 ku = 90).
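As a sketch, one iteration of this proportional controller could be implemented as follows, using minMaxIdx and calcHist as described in the measurement step. The variable names follow the flowchart in figure 4.10, the gains are the experimentally determined values (kp = 90, kp2 = 400, derived below), and the sign of the correction in the overexposed branch is chosen here so that the exposure time decreases, which equation 4.1 and the flowchart leave implicit.

    #include <opencv2/opencv.hpp>
    #include <algorithm>

    // One auto-exposure iteration for a 10-bit image m (CV_16UC1);
    // tET is the current exposure time in microseconds.
    double updateExposure(const cv::Mat &m, double tET){
        const double SPy = 1023.0;   // intensity set point (full 10-bit range)
        const double SPn = 30.0;     // accepted number of saturated pixels
        const double kp  = 90.0;     // gain below saturation (0.5 * ku)
        const double kp2 = 400.0;    // gain in saturation

        double minv = 0.0, ym = 0.0;
        cv::minMaxIdx(m, &minv, &ym);          // ym: highest pixel value

        // Histogram over the 10-bit range to count the pixels at 1023
        int histSize = 1024;
        float range[] = {0.0f, 1024.0f};
        const float *ranges[] = {range};
        cv::Mat hist;
        cv::calcHist(&m, 1, 0, cv::Mat(), hist, 1, &histSize, ranges);
        double np = hist.at<float>(1023);

        double u;
        if (np < SPn)  u =  kp  * (SPy - ym);  // underexposed: raise tET
        else           u = -kp2 * (np - SPn);  // overexposed: lower tET
        // Saturate to the usable exposure range of the sensor
        return std::max(100.0, std::min(990000.0, tET + u));
    }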
These results are shown in figure 4.9 and are enough to achieve the main aim which is get spectra from image; even so, the system could be improved in future works. The program is based on Ximea’s samples code, this shows how to capture one raw image data from sensor without any post-processing [Xim15d]. As the aim is to made some basic image process and then save the image, it has to be processed using OpenCV; furthermore the manufacturer gives some support to do it [Xim15c]. Sum- marizing the acquisition process, the flowchart (see figure 4.10) explains the process. • Setup device, firstly a device is created and open in order to have hardware access. It could be done using xiOpenDevice. Once the device is open, using xiGetParamInt, features from device can be read. Then, depth image (10-bits in this case) is requested, xiSetParamInt activates correction of sensor defects (XI_PRM_BPC) and (XI_PRM_LED_SELECTOR) turns off leds (digital outputs). Then, memst sets the first num bytes of the block of memory pointed to receive image from image sensor [Xim15e]. • Create variables, it create variables and set the initial values – Matrix m, to save the current image – SPy = 1023, set point to spread the image in all the range available – SPn = 30, minimum number of accepted pixel to avoid overexposure – np = 0, it saves the number of pixel with a value of 1023, pixels – kp = 90, proportional factor considering [np 6 30] – kp = 400, proportional factor when np > 30 2 – r = 0, set point depending on np – u = 0, input signal to system, [µs] – e = 0, error between set point and current value – ym = 0, value of the highest pixel in image(0 - 1023), [counts] – tET = 0, exposure time, [µs] • Read Ximea buffer, put image in Matrix m (Class Mat, OpenCV) denoting it has 10-bits and is in gray scale (CV_16UC1) to work with Master Thesis of Mario Eduardo Zárate Cáceres 4 Implementation 25 Start Setup device Create variables Read Ximea Buffer maxi → ym ∑ pmax → np yes np < SPn no r = SPy r = SPn e = r − ym e = np − r u = kp · e u = kp · e 2 tET = tET + u Saturation Set tET Fix and yes Fix tET ? Save tET no Store yes Save picture? image no no Quit? yes End Figure 4.10: Flowchart of Auto-Exposure Time • Saturation, it limits values between tET = 100 µs and tET = 990 ms • Set tET , set exposure time (µs) in device • Fix tET , once the system is steady, the user can fix the value • Save picture, user determines when the image is saved The complete code is attached as “DisplayCamera” within folder “Program” in CD- ROM. Master Thesis of Mario Eduardo Zárate Cáceres 4 Implementation 26 4.2 Finding orientation Now, images with good quality can be stored; as next step, they must be uploaded into a new program to process the data. The aim is to compute 11 spectra, but first, the image region, where each spectrum is located must be found. The newly images contain spectra from LED white light (see figure 4.11), one of them is considered as reference because this light source is evenly spread and allows to view 11 channels. Perhaps, spectra could be rotated or not due to mechanical assembly of grating. How- ever, the system should be able to get their orientations. The imported image size is 1280 x 1024 pixels; moreover, it is monochromatic and has “.png” format. x [pixels] 0 200 400 600 800 1000 1200 200 400 600 800 1024 Figure 4.11: Newly acquired image from sensor using LED white light When the image is loaded in C++, through OpenCV (imread), the color type of image should be specified. 
Using OpenCV, "depth" means the number of bits that each pixel has. Most algorithms in OpenCV run with 8-bit images; thus, to work with them, some changes must be made to the image in order to display and manipulate it in C++. The best option in depth is 32-bit floating point, so a function was developed to convert an image from 10-bit to 32-bit floating point depth. The function is detailed in program 4.1. Basically it accesses the image pixel by pixel and divides each value by the maximum value of the given depth; in our case 10 bits gives a maximum value of 1023, which is used as divisor. The new image has values between 0.0 and 1.0 at floating point precision.

//Convert an image from CV_16UC1 to CV_32FC1
void Convert16UTo32F(Mat image, Mat &sal, int bdepth){
    double nbits = pow(2, bdepth);        // number of levels, 2^bdepth
    int i = image.cols;                   // 1280
    int j = image.rows;                   // 1024
    for (int k = 0; k < i; k++){
        for (int l = 0; l < j; l++){
            // read the current value in the first matrix
            float val = image.at<ushort>(l, k);
            // divide each value by the maximum value of the available bits
            sal.at<float>(l, k) = val / (nbits - 1.0);
        }
    }
}

Programm 4.1: Convert an image from CV_16UC1 to CV_32FC1

Figure 4.12: Histogram of the white light source and the background with tET = 620 ms

Spectra will not always be diffracted with the same orientation; it can change due to mechanical influences. Working on the whole image, a blur filter (kernel size 3 x 3) was employed; the filter is expressed in equation 4.2 [Ope15a]. Later, a threshold filter helps to detach the region from the background. When the background and the white light image are compared through their histograms (see figure 4.12), the peak in both cases is near an intensity of 100 (white dashed line). The background has values up to approximately 250. Then, if the threshold value is the value where the highest intensity is (≈ 100) plus its half (≈ 150) (red dashed line), only the patch of channels remains. This threshold value is calculated for every image, always as the peak plus its half.

K = \frac{1}{KSize.width \cdot KSize.height} \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix}    (4.2)
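The rule "peak plus its half" can be derived directly from the histogram; a minimal sketch, assuming the image was already converted to CV_32FC1 in the range [0, 1] (the bin count and the function name are illustrative):

#include <opencv2/opencv.hpp>

// Find the histogram peak and return the threshold "peak plus its half".
float peakPlusHalfThreshold(const cv::Mat &img32f)
{
    cv::Mat hist;
    int histSize = 1024;                  // one bin per 10-bit level
    float range[] = {0.0f, 1.0f};
    const float *ranges[] = {range};
    int channels[] = {0};
    cv::calcHist(&img32f, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

    cv::Point peak;
    cv::minMaxLoc(hist, nullptr, nullptr, nullptr, &peak);  // bin with most pixels
    float peakIntensity = (peak.y + 0.5f) / histSize;       // bin centre in [0, 1]
    return 1.5f * peakIntensity;                            // peak plus its half
}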
After applying both filters, the output shows the main patch. Now the matrix is in floating point (0.0 − 1.0), but to use other OpenCV tools it needs to be changed into an 8-bit image. This can be done by dividing the whole image by its highest value (function convertScaleAbs); continuing to work with 10 bits is not significant in this section. Then the function findContours retrieves contours from the binary image using the algorithm of Suzuki [S.S85]. This algorithm returns arrays of points which describe the contours. They were drawn in red in figure 4.13, step (1), and are described in equation 4.3.

contours_{k,i}(x, y) \Rightarrow \begin{cases} k, & \text{contour number} \\ i, & \text{contour points} \end{cases}    (4.3)

Then, using these arrays, the area of each contour is calculated with contourArea, using a special case of Green's theorem, the shoelace formula, which was described by [Mei69]. It is a nifty formula for finding the area of a polygon given the coordinates of its vertices [PSW16]. Consider a polygon made up of line segments between N vertices (x_i, y_i), i = 0 to N − 1. The last vertex (x_N, y_N) is assumed to be the same as the first, i.e. the polygon is closed [Bou88]; equation 4.4 describes the method.

A_k = \frac{1}{2} \sum_{i=0}^{N-1} (x_i y_{i+1} - x_{i+1} y_i)    (4.4)

Later, all the calculated areas are compared and the index of the biggest area (k_max) is stored; this contour contains the spectra area. Each array of points can be considered as a polygon; thus, the center of mass can be computed. The centroid of a non-self-intersecting closed polygon is defined in equation 4.5 [Bou88]:

C_x = \frac{1}{6A} \left| \sum_{i=0}^{N-1} (x_i + x_{i+1})(x_i y_{i+1} - x_{i+1} y_i) \right|    (4.5a)

C_y = \frac{1}{6A} \left| \sum_{i=0}^{N-1} (y_i + y_{i+1})(x_i y_{i+1} - x_{i+1} y_i) \right|    (4.5b)

Here, the considered points are consecutive vertices in order of their occurrence along the polygon's perimeter; moreover, the last point is also the start point (closed area). These values were computed for all the found blobs and were printed as orange circles in figure 4.13, also as a result of step (1). The class Moments from OpenCV computes the same points by calculating the spatial moments. This concept can be extended to discrete images by forming spatial summations over a discrete image function F(i, j) [Pra07]. OpenCV computes the spatial moments as shown in equation 4.6 [Ope15c]:

m_{ji} = \sum_{x,y} \left( array(x, y) \cdot x^j \cdot y^i \right)    (4.6)

A moment represents a specific quantitative measure. Since the points represent an area, the zeroth moment (m00) is the total area. The first moments (m10, m01) divided by the total area give the center, named "center of mass" or "center of gravity"; these points are defined by equation 4.7, using the values calculated in 4.6:

\bar{x} = \frac{m_{10}}{m_{00}}, \quad \bar{y} = \frac{m_{01}}{m_{00}}    (4.7)

Now the biggest contour area, where the spectra are located, is known (contours_{k_max}), and its center of mass (x̄, ȳ) as well. In step (2) the center of mass is printed as a green circle; then the minimum area enclosing the points is computed (function minAreaRect). A simple way to get the smallest-area enclosing rectangle can be generalized in two ways: several sets of calipers can be used simultaneously on one convex polygon, or one set of calipers can be used on several convex polygons simultaneously. "The rectangle of minimum area enclosing a convex polygon has a side collinear with one of the edges of the polygon" [Tou83].

The reference image should be taken from a light source whose spectrum is spread as much as possible over the visible range (400 nm − 780 nm). Sunlight or LED white light can furnish this. Using LED white light, the spectra are uniformly spread; it is a source that is easy to get and, besides, reliable. Hence, an LED white light source is recommended for the calibration process. Afterwards minAreaRect, which uses the class RotatedRect, is applied; the process returns 4 points (P1, P2, P3 and P4), which are saved in a RotatedRect object. The points are printed as yellow lines in figure 4.13. Once the points are known, their sides can be measured (P1P2, P2P3, P3P4 and P4P1). Since the side parallel to the channel direction is longer than the orthogonal one, the channel orientation can be calculated. In this test, P4P1 is the longest side and is drawn as a green line. The rotation angle can be obtained by taking the arctangent (atan2) of P1 and P4. Later, in step (2), an orthogonal line crossing exactly the center of mass (red line) must be calculated. This new line will cross all the channels; if every pixel along this perpendicular line is checked, the spacing points among the channels can be found. This orthogonal line is going to be the inspection line (Ln) and will be used to find the separating points among the channels.
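The whole step can be condensed into a few OpenCV calls; a minimal sketch, assuming the blurred and thresholded 8-bit image as input (the function name is illustrative, the angle is returned in radians, and error handling is reduced to an assertion):

#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

// Largest contour, centre of mass (equations 4.5-4.7) and rotation angle of the
// longest side of the minimum-area enclosing rectangle.
double findOrientation(const cv::Mat &binary8u, cv::Point2d &center)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    CV_Assert(!contours.empty());

    int kmax = 0;
    for (size_t k = 1; k < contours.size(); ++k)             // biggest blob
        if (cv::contourArea(contours[k]) > cv::contourArea(contours[kmax]))
            kmax = static_cast<int>(k);

    cv::Moments mo = cv::moments(contours[kmax]);            // equation 4.6
    center = { mo.m10 / mo.m00, mo.m01 / mo.m00 };           // equation 4.7

    cv::RotatedRect box = cv::minAreaRect(contours[kmax]);   // smallest enclosing box
    cv::Point2f P[4];
    box.points(P);                                           // P1..P4 as in the text
    cv::Point2f a = P[1] - P[0], b = P[2] - P[1];
    auto len2 = [](cv::Point2f v) { return v.x * v.x + v.y * v.y; };
    cv::Point2f longest = (len2(a) > len2(b)) ? a : b;       // side parallel to spectra
    return std::atan2(longest.y, longest.x);                 // inclination, as with atan2
}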
Finally, in step (3), the starting position is set. As it is unknown where exactly each spectrum begins, the initial point for the inspection is shifted to the left by a third of the longest side. The whole sequence is shown in figure 4.13 and the code is attached in appendix F.

Figure 4.13: Finding orientation process. First the image is blurred; then a threshold filter, with the value of the intensity peak plus its half, is used. (1) Find contours and centers of mass for all blobs (2) consider only the biggest blob, find the minimum rectangular enclosing area (P1, P2, P3, P4) and its longest side in order to calculate its orientation, and compute an orthogonal line crossing the center of mass (Cx, Cy) (3) finally, the orthogonal line is shifted a third of the longest side to the left

The system stores in memory an angle which is the spectra inclination. It can work with an initial image even if it is rotated by 90°. Whenever the process was carried out, the inclination was always found. This procedure should be repeated thousands of times, under many different conditions, in order to find the specific cases where the system fails; that would be an interesting quality control. It is important to mention that this procedure only works if the number of channels is 11 or less; with more channels another criterion should be used, because the longest side would no longer be parallel to the spectra and a wrong direction would be considered. However, it works quite well under the current conditions. This represents a drawback which should be improved in future works.

4.3 Image decomposition

In order to yield good results, at least two images must be acquired: one is the reference image from LED white light and the other is the background image, which must be taken in the dark. Moreover, there are "hot pixels" which distort the obtained information; fortunately, these pixels are discarded when the background is subtracted. Figure 4.14a displays the original image, which was acquired with tET = 620 ms; in figure 4.14b the background has already been subtracted. Afterwards, the 11 spectra are clearer than before.

Figure 4.14: Reference image to detect the channels (a) White light LED, tET = 620 ms (b) Background subtracted, tET = 620 ms

Apparently both images look similar; however, the noisy background was erased. The new image is sharper and the spaces among the channels are darker. Both images have been taken with the same exposure time; otherwise, negative values could appear, hindering the decomposition process. If one random vertical line along the image is inspected, differences between the original image and the new one without background can be observed (figure 4.15); besides, it is noticeable that some noise has been removed.

Knowing the inclination angle and the start line, the new image resulting from the background subtraction can be used for the work. If each pixel along the start line is checked, intensity changes can be detected. Ideally, this test line (Ln) would be vertical; in that case there is no problem, because only one column of the image would be inspected (i.e. [100; 0:1024]).
Nevertheless, when the test line is rotated, the size of this line is bigger than the number of columns. This new line has pixels with floating-point (x, y) coordinates (see appendix G). Remember that pixel positions in the original image are integer numbers; therefore, these values do not fit exactly with an existing position. For example, the pixel P(x, y) shown in figure 4.16a falls within a pixel (red box), although not at its middle; thus, it should not simply take the value of that pixel.

Figure 4.15: Data from a random vertical line, with and without background

But if the neighboring points (the green ones) are considered, the average value of this new floating point can be computed. The image has two variables; hence bilinear interpolation is used for interpolating on a rectilinear 2D grid.

Figure 4.16: (a) Test line Ln along the image (b) Bilinear interpolation with neighbors Q1(x1, y1), Q2(x2, y1), Q3(x1, y2), Q4(x2, y2) and intermediate points R1, R2

The pixel P(x, y) and its neighboring pixels are displayed in figure 4.16b to explain how the method works. First the method interpolates along the x-axis at both rows (equations 4.8a and 4.8b); afterwards, these two new points are interpolated once more along the y-axis to get the new value of P(x, y) (equation 4.8c). In OpenCV, getRectSubPix performs this interpolation.

R_1(x, y_1) = \frac{x_2 - x}{x_2 - x_1} \cdot Q_1 + \frac{x - x_1}{x_2 - x_1} \cdot Q_2    (4.8a)

R_2(x, y_2) = \frac{x_2 - x}{x_2 - x_1} \cdot Q_3 + \frac{x - x_1}{x_2 - x_1} \cdot Q_4    (4.8b)

P(x, y) = \frac{y_2 - y}{y_2 - y_1} \cdot R_1 + \frac{y - y_1}{y_2 - y_1} \cdot R_2    (4.8c)
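Written out directly, the method is a few lines; a minimal sketch, assuming unit grid spacing and omitting bounds checks (in practice cv::getRectSubPix on a 1 x 1 patch gives the same result):

#include <opencv2/opencv.hpp>
#include <cmath>

// Equations 4.8a-4.8c for one floating-point sample position (x, y).
static float bilinear(const cv::Mat &img32f, float x, float y)
{
    int x1 = static_cast<int>(std::floor(x)), x2 = x1 + 1;
    int y1 = static_cast<int>(std::floor(y)), y2 = y1 + 1;
    float Q1 = img32f.at<float>(y1, x1), Q2 = img32f.at<float>(y1, x2);  // lower row
    float Q3 = img32f.at<float>(y2, x1), Q4 = img32f.at<float>(y2, x2);  // upper row
    float R1 = (x2 - x) * Q1 + (x - x1) * Q2;   // equation 4.8a (x2 - x1 = 1)
    float R2 = (x2 - x) * Q3 + (x - x1) * Q4;   // equation 4.8b
    return (y2 - y) * R1 + (y - y1) * R2;       // equation 4.8c
}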
This method yields suitable values for each pixel in the inspection line. This new line is considered as an independent array of data; the position of each pixel is taken as the x′-axis (see figure 4.16a). According to figure 4.17, the values pass over all the channels and can be described by equation 4.9; along this new axis the values become integer again.

z = f(x'), \quad 0 \le x' < |L_n|,\; x' \in \mathbb{Z}, \quad 0 \le z < 1024,\; z \in \mathbb{Z}    (4.9)

This process continues with the image without background, but it still has noise which should be removed. Thus, a blur filter was used to erase the noise and obtain smooth waves. If the kernel is small (3 x 3), it will not remove the peaks efficiently; as the kernel increases, the results become better. Nevertheless, if it is too large, relevant data could be dismissed. A kernel of 13 x 13 works suitably for channel identification. The results can be observed in figure 4.17; it is now easier to discern the channels. Some channels are lower than the others and, on the sides, the channels are not clear enough to differentiate them. The lowest point between two highest points is the spacing between two channels. Assessing the data on many lines crossing the image from top to bottom allows finding these spacing points. Differentiating the data returns the highest and lowest points; if this is done with the original data, the results are useless, because of the noise and its many little peaks.

Figure 4.17: Interpolated points along one random line in the image; the original data is noisy, while the blurred data (13 x 13) is smooth. Bottom points represent the spacing between two channels

The aim is to segment the image, so the peaks at top and bottom are located where the first derivative is zero (f′(x′) = 0). These points represent the middle of each channel and the spacing points between two consecutive channels. Working on the blurred image gives better results when finding the peaks. Afterwards, only the lowest points (zn) are chosen, since these represent the spacing points. Enlarging a portion of the test line (dashed red box in figure 4.17), the data before and after applying the blur filter are different; evidence of this is in figure 4.18.

Figure 4.18: Intensity values along the test line, zoomed from figure 4.17; the extrema satisfy f′(x′) = 0

If only the conditions mentioned before are considered, some shortcomings could appear. Hence, new requirements for the points to be considered as key points (zn) should be established:

1. f′(x′) = 0 ⇒ z1, z2, ..., zn−1
2. zn = zmin + window ⇒ z0, zn
3. ∀ {zn > (zmin + window)}

A small window of 40 counts above the minimum value per image is considered, in order to dismiss noisy values below it. The value was set after many assessments and works very well (see figure 4.17). The first condition is illustrated by figure 4.19, where the data represent the first derivative along a random test line. All the points which cross the value of zero were found. Between 0 and 240 pixels, approximately, there are many points which cross zero, but in this part the third condition is not fulfilled. These points lie at the top as much as at the bottom; however, noise is still readable, whereby many needless or redundant points have to be considered (see appendix H). All the points whose values fulfill the conditions are printed into an empty image. Then, using again a blur filter (9 x 9), a threshold (10) and the findContours algorithm, the program returns the patches in groups; afterwards, the center of mass of each group is calculated and saved. These steps can be observed in figure 4.20.

Figure 4.19: dz = f′(x′), peaks along the inspection line, blurred (9 x 9)

Figure 4.20: Processing of the image cropped at [630:790, 750:910] (a) Original (b) Blur (c) Gray + threshold (d) Result

All the detected points are either at the top or at the bottom of the wave. This process does not always give the same results, due to some noise. Hence, the process should be done many times, taking different test lines and trying to cover the whole width of the image. Sometimes the found points are not close together and wrong new ones are created, but the following conditions avoid this:

• The test line must have at least 10 points, meaning that at least 5 of the 11 channels are bright enough
• Observe each point found on the test line and check that the points alternate between top and bottom; if they do not, discard this test line

Figure 4.21: Top and bottom detection in [630:790, 750:910]; many points around one peak are considered as only one, and the bottom is a key point (z3)

Looking closely at figure 4.21, many points were detected around the top and also at the bottom of the wave. All of them are in groups, and each group represents only one point.
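A minimal sketch of this key-point search on one blurred test line (the function name and the discrete derivative are illustrative; window = 40 as stated above):

#include <vector>
#include <algorithm>

// Bottom key points: zero crossings of f'(x') that stay above zmin + window.
std::vector<int> findSpacingCandidates(const std::vector<float> &z, float window = 40.0f)
{
    float zmin = *std::min_element(z.begin(), z.end());
    std::vector<int> keys;
    for (std::size_t i = 1; i + 1 < z.size(); ++i) {
        // f'(x') crosses zero from negative to positive: a local minimum
        bool isMinimum = (z[i] - z[i - 1]) < 0 && (z[i + 1] - z[i]) > 0;
        if (isMinimum && z[i] > zmin + window)   // condition 3: dismiss noise near the floor
            keys.push_back(static_cast<int>(i));
    }
    return keys;
}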
As is known, the image contains 11 channels, so there should be 23 points; but after many assessments and some image inspections, the first channel (bottom of the image) and the last channel (top of the image) are not easy to disjoin, because they are blurry. Only the lowest points are useful to find the space among the channels. Point number 2 is higher than point number 3, as shown in figure 4.22; therefore all the odd-numbered points are bottom points. If, on the other hand, number 3 had been higher than 2, the even-numbered points would have been the bottom points. This holds whenever the spacing between the channels is readable: since it is known that top and bottom points alternate along consecutive channels, they can be disaggregated. As the first and last channels (bottom and top channels in figure 4.14) are not always detectable, all the distances among the spacing points (considered as the channel width wch) are saved; the values are shown in table 4.1. Later, the width of the widest channel is taken as the width of the first and last channels.

The steps explained before should be carried out as many times as possible. In this research, steps of 1, 2, 3, 4, 5, 10, 20, 30, 50 and 100 pixels between test lines were tested. Checking each pixel would be the best option, but it would take many iterations and too much time. After many tests, jumps of 25 pixels, checking 20 test lines, are recommended; they produced good results for segmenting the channels. In fact, many test lines are used because sometimes the found points are not suitable: if many points are detected and most of them are redundant, reliable points are guaranteed. The next restrictions avoid considering test lines where the spacing points are not clear enough:

• The number of points per test line must be repeated, at least 4 times on different lines, in order to be considered as reference
• Each test line must have the same number of points as the reference; otherwise, all the points along this test line are discarded

Figure 4.22: Found points along a random test line

After the points were saved, they are compared horizontally to fit a line using linear regression, as shown in equation 4.10, where n is the number of considered points.

m = \frac{n \sum xy - (\sum x)(\sum y)}{n \sum x^2 - (\sum x)^2}    (4.10a)

b = \frac{\sum y - m (\sum x)}{n}    (4.10b)

y = mx + b    (4.10c)

Only the points which are considered part of the linear regression are joined by a line; they obey all the conditions mentioned before. The middle points of the considered values are drawn in green; they represent the pivot points used to rotate the spacing lines (see figure 4.23a). The first and last channels do not have good quality but, assuming that both have roughly the same width as the widest channel of each test line, their points are estimated; they fit very well and are also considered for computing the channel segmentation. Once the lines have been computed, the results are printed as red lines in figure 4.23b. These lines are a little odd, because they are not parallel to each other, although ideally the spectra are parallel.

Figure 4.23: Linear regression on the channel points; the rotation angle is the same for all spacing lines and is saved as ∡α (a) Points found after inspecting many inspection lines (b) Horizontal linear regression using the spacing points

Considering the premise that all spectra are parallel, the lines should be alike; computing an average slope using equation 4.11 is possible.
As all the lines then have the same slope (m̄), all of them have the same rotation angle; it is saved as ∡α, the rotation angle of the spectra. Once the new slope is known, the new lines must cross the pivot points, finally creating the lines which segment the image into 11 channels (nch = 11).

y_{new} = \bar{m}x + b, \qquad \bar{m} = \frac{1}{n_{ch}+1} \sum_{i=1}^{n_{ch}+1} m_i    (4.11)

The new lines with the average slope (m̄) are printed as green lines in figure 4.23b. Now the image has been decomposed into 11 channels. Since this process has many loops and a high time consumption, the steps mentioned before should run only once, whenever the system is calibrated. Two points per line are saved in the "Channels.data" file (table 4.1); the two points represent the intersections of each line with the image edges. The number of spacing lines is always the number of channels plus one (nch + 1). Moreover, the width of each channel (wch) is included; the first wch entry in table 4.1 is empty, because this distance is calculated using two consecutive lines. The next time the program runs, it can load the "Channels.data" calibration file, which decreases the processing time of the following runs. The displayed values are only referential, because they change for every taken image, but the stored variables are always alike.

Table 4.1: "Channels.data" calibration file; Point1 and Point2 represent the start and end coordinates of one channel-separating line, wch is the channel width between two consecutive spacing lines, and there is no channel 0

Line | Point1 (x, y) | Point2 (x, y) | wch
0  | (0, 970) | (1280, 967) | –
1  | (0, 892) | (1280, 888) | 78.7656
2  | (0, 813) | (1280, 810) | 78.7465
3  | (0, 752) | (1280, 749) | 60.9367
4  | (0, 698) | (1280, 695) | 53.7056
5  | (0, 641) | (1280, 637) | 57.4629
6  | (0, 580) | (1280, 576) | 61.0726
7  | (0, 517) | (1280, 514) | 62.8154
8  | (0, 472) | (1280, 468) | 45.3514
9  | (0, 393) | (1280, 390) | 78.7667
10 | (0, 314) | (1280, 311) | 78.7473
11 | (0, 235) | (1280, 232) | 78.7663

4.4 Calibrating the wavelength

To display the wavelength values correctly, the measurement device has to be calibrated. Calibration is the setting or correcting of a measuring device or base level, usually by adjusting it to match a dependably known and unvarying measure. Consequently, the position in pixels of known wavelength values has to be acquired. As was mentioned in section 3.2.1, visible light lies between 390 nm and 700 nm (VIS), but the location of these values in the acquired image is unknown. Providing light sources whose wavelengths are known helps to find the position of each value. Equation 4.12 is used to get the wavelength calibration (linearity) coefficients. The wavelength calibration of spectrometers drifts slightly as a function of time and environmental conditions [Opt10, App. A]; periodic calibration is recommended.

λ(p) = C_1 p + I    (4.12)

Where:
λ: the wavelength at pixel p
C1: the first coefficient, [nm/pixel]
I: the wavelength of pixel 0
p: position, [pixels]
Rλ: the reference intensity at wavelength λ [nm]

Light sources whose wavelength values are known are needed. The spectral bandwidth of the light source is important: the narrower it is, the easier it is to recognize its position. The best option is to use laser diodes, electrically pumped semiconductor lasers, which produce narrow waves with a low error (±10 nm). In this process three lasers were used, but only two were useful (see figure 4.24): the red and the green laser.
Nevertheless, the blue one was not considered, because its value (405 nm) does not appear in all the channels; besides, when the image is acquired, the spot is near the image boundary (see appendix C).

Figure 4.24: Laser diodes for system calibration, led through the optical fiber (∅0.3 mm x 11): red (650 nm ± 10, max output < 1 mW, Class II), green (532 nm ± 10, max output < 1 mW, Class III) and blue (405 nm ± 10, max output < 1 mW, Class IIB); they are not used at the same time

Only these two lights were used as reference, because their spectra are narrow and easy to recognize in the image; both are shown in figure 4.25. A linear behavior among the channels was discovered after many assessments, whereby only two reference lights are enough to consider; nevertheless, many light sources were used and compared to validate the results.

The exposure time again plays an important role when acquiring images. Sometimes the image with less exposure time is better, if the intensity of the input light is too high; consequently, the background image must have been taken with the same exposure time. The image in figure 4.26a is better than 4.26b: the first does not have too much noise and it is easier to find the spot in each channel. The spot represents the spectrum position of the used source. On the other hand, when the exposure time is too high, the found spot is wider and represents a hurdle to achieving good results. A perfectly narrow light source depends on the spectrometer slit, which in this case is at the same time the end of the optical fibers.

Figure 4.25: (a) Red laser diode [Las16] (b) Spectrum (λ = 650 nm) (c) Green laser diode [Las15a] (d) Spectrum (λ = 532 nm)

Figure 4.26: Image acquired using the green laser diode with different exposure times (a) tET = 100 ms (b) tET = 600 ms

Ideally all spots should have a linear disposition, since the optical fibers are mounted one next to the other. However, it is not easy to make a perfect build, whereby there are deviations among them (see figure 4.27).

Figure 4.27: Optical fiber multichannel slit, laboratory sample with 21 channels [RCN15, Fig. 6(b)]
Using the key points of each channel (table 4.1), every channel can be isolated and processed. If a new image is used to copy each channel, the new image has only one spot per channel, as shown in figure 4.28b. First, a blur filter (9 x 9) is used to reduce the image noise (figure 4.28c). Second, a threshold filter discards the background: as it is known that the spot is the highest value in the image and the center point of this wave is desired, the considered threshold value is (ymax − 80). The value found should additionally be greater than 100 (ymax − 80 > 100), to avoid confusing low values with noise; both values, 80 and 100, were found experimentally (see figure 4.28d). Third, the function findContours returns arrays of points describing the contours of the binary image, using the algorithm of Suzuki [S.S85]. Finally, the areas of these contours are calculated with equation 4.4 and, with the array of points of the biggest contour only, the center of mass is computed: the class Moments returns the zeroth and first moments, and using equation 4.7 the center of mass is known; it is drawn as a green point in figure 4.28e. Repeating this process on each channel, in both images (green and red laser as light sources), the points can be detected (see figure 4.29). The code which performs this is attached in appendix J.

Figure 4.28: Image processing of each isolated spot; the same procedure is repeated for each channel until an array of points is generated, and the same process is used for every reference light (a) Original image from λ = 650 nm (b) Channel cropped (c) Blur (9 x 9) (d) Threshold (ymax − 80) (e) Find contour + center of mass

Once the points were found, they are compared with one another and a new line is computed, again using linear regression. These lines pass across the whole image, over the 11 channels. During the calibration process it is sometimes not possible to detect the spot in every channel, because the light is not coupled correctly, as was shown in figure 4.24; since no fixed connection exists, this error will be present. Common spectrometers have optical fibers with a coupling screw (optical fiber connectors such as the SMA 905 or SMA 906 style); this could enhance the calibration process. However, if at least two spots from the 11 channels are found, joining them with a line represents the position in pixels of that wavelength.

Each line (L532 nm and L650 nm) has its own slope, in spite of the fact that they should be parallel. Therefore, new lines are computed using the average slope. These are rotated around a pivot point which is located in the middle of the channels; as there are 11 channels, channel 6 is the middle one, and taking a point above or below it would be alike. During this process, a point below channel 6 has been considered; the points are shown in figure 4.30. Besides, the angle between this new slope and the x-axis is named ∡β (L532 nm∡β, L650 nm∡β).

Figure 4.29: Arrays of points produced by the red and green laser diodes (a) Green, λ = 532 nm (b) Red, λ = 650 nm

Figure 4.30: The two wavelength position lines were rotated around the pivot point to become parallel; the angle between this new slope and the x-axis is named ∡β

Now there are two line groups: one contains the lines which decompose the image into channels, and the second one indicates the wavelength positions. The intersections between them are computed and saved. These points will be used to find a linear equation that relates pixels and wavelength λ. As shown in figure 4.31, blue points represent the intersections between the spacing lines and the wavelength position line; the lower intersections, below q1, are not considered. The distance between P1(x, y) and q1(x, y) is named d1, and its absolute norm |dn| is stored. The same value is calculated per channel (d1:11) and repeated twice (L532 nm, L650 nm). All the values shown could vary with other images, but the results are very similar. In order to reduce processing time and not repeat this stage, the results are saved in a file named "ValProChannels.data", together with the angle ∡β and the number of tests used (L532 nm, L650 nm). The stored values are shown in table 4.2. This stage is performed only during calibration; the next time, the previously stored data is used.

Figure 4.31: The distance between the intersection point (q1) and the channel point P1 is named d1 and is stored to compute a linear equation later; the same process is repeated in every channel (d1 ... d11)

Table 4.2: The "ValProChannels.data" file is stored as a calibration file; afterwards this file is used to find the wavelength positions rapidly

∡β [rad]: −1.45607, number of tests: 2

Channel | L532 nm | L650 nm
d1  | 320.676 | 633.094
d2  | 329.722 | 642.173
d3  | 336.729 | 649.228
d4  | 342.932 | 655.473
d5  | 349.512 | 662.127
d6  | 356.520 | 669.183
d7  | 363.725 | 676.407
d8  | 368.928 | 681.675
d9  | 377.969 | 690.748
d10 | 387.044 | 699.885
d11 | 396.119 | 709.022

Working with the stored points, finding a relation between pixels and wavelength is possible. The stored distances d1:11 are different; thus each channel has a different equation, in contrast to a conventional spectrometer, which has only one. Moreover, each channel has its own range.
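For reference, the per-channel spot detection of figure 4.28 can be condensed into a few OpenCV calls; a minimal sketch, assuming an 8-bit channel image in which a spot is actually present (the function name is illustrative, and the constants 80 and 100 follow the text's intensity units):

#include <opencv2/opencv.hpp>
#include <vector>

// Blur, threshold at (ymax - 80), keep the largest contour, return its centre
// of mass (equations 4.4, 4.6 and 4.7).
cv::Point2d spotCenter(const cv::Mat &channel8u)
{
    cv::Mat blurred, mask;
    cv::blur(channel8u, blurred, cv::Size(9, 9));
    double ymax;
    cv::minMaxLoc(blurred, nullptr, &ymax);
    CV_Assert(ymax - 80 > 100);               // otherwise only noise is present
    cv::threshold(blurred, mask, ymax - 80, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    CV_Assert(!contours.empty());

    int best = 0;
    for (size_t k = 1; k < contours.size(); ++k)        // biggest contour only
        if (cv::contourArea(contours[k]) > cv::contourArea(contours[best]))
            best = static_cast<int>(k);
    cv::Moments m = cv::moments(contours[best]);        // zeroth and first moments
    return { m.m10 / m.m00, m.m01 / m.m00 };            // centre of mass
}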
To evaluate this, not only these two light sources were used, but also many others for comparison; the values are shown in table 4.3. The points p1 were made using optical "Metall-Interferenzfilter", which are bandpass filters: they only transmit a certain wavelength band and block the others (see appendix B). The points p2 were made using "Edmund filters", which are hard-coated bandpass filters, but these have up to 50 nm of FWHM; the spectra obtained with these filters are not narrow enough, they are wider and useless [Opt15a]. The last assessment, p3, was done using only the green and red diode lasers, whose wavelengths are known.

Table 4.3: Position test of wavelengths in channel 1; at the same time the other channels were inspected, but only channel 1 was considered as a representative sample

Rλ [nm] | p1 [pixels] | p2 [pixels] | p3 [pixels]
475 | 178.170  | –        | –
500 | 228.371  | 217.779  | –
525 | 290.379  | –        | –
532 | –        | –        | 320.676
575 | 410.836  | –        | –
600 | 469.885  | 479.895  | –
625 | 551.935  | –        | –
650 | –        | 614.648  | 633.094
675 | 668.042  | –        | –
700 | 728.954  | 732.007  | –
725 | 800.643  | –        | –
750 | 880.374  | 923.784  | –
775 | 940.964  | –        | –
800 | 1038.760 | 1024.830 | –

The values were compared in the diagram shown in figure 4.32. The blue circles λ(p1) represent the first test; the relation among these points is linear. The linear regression made with these values is plotted in black, λ(p), and fits all the points very well. The last assessment was made using only p3. Table 4.3 is available for each channel. Only the first channel (λch0) and the last one (λch10) were included, as triangles; they were computed using only two points, and the results are lines parallel to λ(p), which was computed using 12 points, although shifted by 4.13 nm. In order to find the wavelength positions, many points are not necessary: only two suitable light sources are enough, but these must be as narrow and accurate as possible.

Figure 4.32: Wavelength position in channel 1; λ(p1) was made using interference filters, the λchx(p) lines using laser diodes, and each channel has a different equation: λ(p) = 0.38 · p + 415.01, λch0(p) = 0.38 · p + 410.88, λch10(p) = 0.38 · p + 387.88

The aim is to recommend a calibration process that is easy to repeat. Three tests were made, and the last one needed only two light sources while producing similar results; thus, this process, using only two laser diodes, is recommended for the next calibrations instead of the others. In addition, optical filters, due to their operating principle, pass a different band depending on the angle at which the optical fiber receives the light; hence, they do not produce suitable values. As the two wavelength position lines are parallel, they have the same slope.
Considering equation 4.12, already explained, the new value is C1 = 0.38 for all the channels. However, the initial value (I = λi) is different for each channel; even so, equation 4.13 is used for every channel. The conclusion is that the system can inspect light within the image size (1280 x 1024); table 4.4 shows the limits of each channel. These values could change due to mechanical changes producing geometrical deviations. Moreover, every 2.63 pixels represent 1 nm in wavelength; this result eases the task of computing the spectra.

λ(p_n) = 0.38 p + λ_i    (4.13)

Table 4.4: Wavelength range of the channels; each channel has a different range due to the image boundaries

Channel | λi [nm] (pi = 0 pixels) | λf [nm] (pf = 1280 pixels)
1  | 410.88 | 897.28
2  | 407.48 | 894.25
3  | 404.85 | 891.25
4  | 402.53 | 888.93
5  | 400.07 | 886.47
6  | 397.45 | 883.85
7  | 394.74 | 881.14
8  | 392.80 | 879.20
9  | 389.41 | 875.81
10 | 386.01 | 872.41
11 | 382.62 | 869.02

4.5 Cropping channels

The image has already been decomposed into 11 channels; the key points of every channel and the positions of the wavelengths are known. The next step is to crop each channel and access each pixel value in order to compute the spectrum. Just to recap: the positions and values in the image are integer, but the points which crop the image are not necessarily integer numbers; they are floating points. The procedure used before on the test lines is applied again here: bilinear interpolation, mentioned in section 4.3, is the best way to compute the value of each (floating point) pixel while keeping the relation with the original image. The task is easier using polar coordinates (the function phase calculates the rotation angle of 2D vectors, and polarToCart calculates the x and y coordinates of 2D vectors from their magnitude; the direction of an angle is considered clockwise [Ope15b]).

There are 4 points per channel, given by two consecutive lines of table 4.1; these points enclose each channel and are represented by P1, P2, P3 and P4 in figure 4.33. If lines join the points, one channel is enclosed and ready to be segmented. These lines have a rotation of ∡(360° − α) (clockwise). Now a new image with only one channel can be created using the values from the original image. The size of the new image, which includes only one channel, is given by equation 4.14.

mat_{ch} \Rightarrow \begin{cases} width_{columns} = \|P_1P_2\| \\ height_{rows} = \dfrac{w_{ch}}{\cos(\gamma)} \end{cases}    (4.14a)

γ = 90° + α − β, \qquad θ = 180° − β    (4.14b)

Figure 4.33: Cropping channels from the image; the unit vectors |A| = 1 and |B| = 1 and the angles α, β, γ and θ span the new axes of the channel matrix

First, starting at P1, wherever it is, each point along P1P2 is inspected using a unit vector (|A|∠(360° − α)). Furthermore, at every step forward, every pixel along the direction of another unit vector (|B|∠θ) is inspected, until the line P3P4 is reached. All the found values go into a new matrix (mat_ch); θ depends on β (see equation 4.14b). The new values are located on the new axes x′ and y′ and contain one spectrum. The new matrix is shown in figure 4.34; this matrix contains only one channel, whereas the original image has 11 channels. This process is repeated until there are 11 matrices with the values of each channel. The code which makes this possible is attached in appendix I.
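A minimal sketch of this walk (the function and parameter names are illustrative; angles are in radians, bounds checks are omitted, and cv::getRectSubPix supplies the bilinear sampling):

#include <opencv2/opencv.hpp>
#include <cmath>

// Build mat_ch by stepping from P1 along unit vector A (angle -alpha, i.e.
// 360° - alpha clockwise) for the columns and along unit vector B (angle theta)
// for the rows; see equation 4.14.
cv::Mat cropChannel(const cv::Mat &img32f, cv::Point2f P1, cv::Point2f P2,
                    double alpha, double theta, int height)
{
    int width = static_cast<int>(cv::norm(P2 - P1));    // ||P1P2||, equation 4.14a
    cv::Point2f A(std::cos(-alpha), std::sin(-alpha));  // along the channel
    cv::Point2f B(std::cos(theta),  std::sin(theta));   // across the channel
    cv::Mat ch(height, width, CV_32FC1);
    cv::Mat pixel;                                      // 1x1 bilinear sample
    for (int x = 0; x < width; ++x)
        for (int y = 0; y < height; ++y) {
            cv::Point2f q = P1 + A * static_cast<float>(x) + B * static_cast<float>(y);
            cv::getRectSubPix(img32f, cv::Size(1, 1), q, pixel);
            ch.at<float>(y, x) = pixel.at<float>(0, 0);
        }
    return ch;
}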
Figure 4.34: Newly created matrix (mat6) from channel 6, which is now the region of interest (ROI)

4.6 Computing Spectra

In contrast with a typical spectrometer, which works only with linear data, the acquired image provides matrix data. Each column represents a value in wavelength; a typical spectrometer has only one value there, while this assessment has between 50 and 60 pixels per column, giving redundant data to compensate for possible variations. From here on, the image is plotted on 3 axes to display the data better: the third axis is z, and the value of every pixel of figure 4.34 is shown. Each column of the image is represented by a layer; slicing the image at x′ = 600 creates a new 2D plane in which all values can be checked independently. The new inspection plane is shown in figure 4.35d. As shown before, the original data is noisy (figure 4.35a), but it improves when the background is removed (figure 4.35b) and the result is blurred (see figure 4.35c).

Figure 4.35: Matrix data (a) Original data from channel 6 (b) Background subtraction applied to channel 6 (c) Channel 6 smoothed with filters (d) Channel 6 cropped at x′ = 600

Observing the data, the blurred image gives the smoothest wave (see figure 4.36), while the core shape of the data is kept without considerable changes. The smoother the wave is, the better the spectrum is computed; furthermore, the spectrum will also be smooth and continuous, without little peaks (noise).

Figure 4.36: Data in channel 6, layer at x′ = 600

The values in the new 2D plane have a similar distribution; the aim is to choose a representative value. This is not simple, because the values change a lot along the spectra and are different in every channel as well. Among the different existing methods to compute this value, four means are proposed in addition to the maximum value; they will be compared in order to select the one with the smallest error:

• Maximum value
• Arithmetic mean
• Weighted mean
• Quadratic mean
• Weighted Gaussian mean

4.6.1 Arithmetic Mean

This method considers all the values along one column but discards where each value is located. Ideally the highest values are in the middle of the channel, but it is not always like this. A representative value is computed using equation 4.15.

\bar{z} = \frac{1}{n}(z_1 + z_2 + z_3 + \cdots + z_n)    (4.15)

4.6.2 Weighted Mean

Also called the contraharmonic mean; each value is weighted differently, depending on how far it is from the maximum point. Equation 4.16 shows the general method, where the value of each weight is given by equation 4.17, zmax being the maximum z of the column. Thereby every pixel along the y′-axis has a different weight, and only one of them, or maybe more, has w = 1.
\bar{z} = \frac{\sum_{i=1}^{n} w_i z_i}{\sum_{i=1}^{n} w_i} = \frac{w_1 z_1 + w_2 z_2 + \cdots + w_n z_n}{w_1 + w_2 + \cdots + w_n}    (4.16)

w_i = \frac{z_i}{z_{max}}    (4.17)

The value zmax is explained in figure 4.37. Furthermore, substituting equation 4.17 into equation 4.16 does not change the values, but it leads to equation 4.18 and, after multiplying numerator and denominator by zmax, to the new equation 4.19, which is easier to implement.

Figure 4.37: Weight in each slice per channel, with zmax the maximum value of the column

\bar{z} = \frac{\frac{z_1}{z_{max}} z_1 + \frac{z_2}{z_{max}} z_2 + \cdots + \frac{z_n}{z_{max}} z_n}{\frac{z_1}{z_{max}} + \frac{z_2}{z_{max}} + \cdots + \frac{z_n}{z_{max}}}    (4.18)

\bar{z} = \frac{(z_1)^2 + (z_2)^2 + \cdots + (z_n)^2}{z_1 + z_2 + \cdots + z_n}    (4.19)

4.6.3 Quadratic Mean

In statistics this is also known as the root mean square (RMS); in physics it is a characteristic of a continuously varying quantity. Although it does not seem to vary considerably, it is interesting to obtain this value in order to compare. It is calculated as the square root of the mean of the squares, as shown in equation 4.20.

\bar{z} = \sqrt{\frac{1}{n}\left((z_1)^2 + (z_2)^2 + \cdots + (z_n)^2\right)}    (4.20)

4.6.4 Weighted Gaussian mean

This method considers the position of each pixel; it starts from the assumption that the values are symmetrically distributed. Thus, a pixel located at the middle has w = 1, and the rest of the points have weights proportional to their distance from the midpoint. The Gaussian function is used as reference to calculate the weights; as observed in figure 4.38, the value at yi is zi, but its weight wi depends on its position in the slice. The average value is calculated using equation 4.21.

f_{Gauss}(y) = a \cdot \exp\left(-\frac{(y - b)^2}{2c^2}\right)

Figure 4.38: Gaussian mean; the weight wi = f(yi) depends on the position yi within the slice, with a the amplitude, b the midpoint and c the width

\bar{z} = \frac{\sum_{i=1}^{n} w_i z_i}{\sum_{i=1}^{n} w_i}    (4.21)

The results are plotted together in figure 4.39. The arithmetic mean gives the lowest value, because it only considers the z value of each pixel, dismissing where it is situated. Repeating this for every pixel of x′ makes it possible to build a spectrum; however, the axis is still in pixels, whereby equation 4.13 is enough to turn the values from pixels into wavelength in nanometers.

Figure 4.39: Channel 6 and its values at x′ = 600: maximum value ymax, arithmetic mean y1, weighted mean y2, quadratic mean y3 and weighted Gaussian mean y4

Now that all the values are on the wavelength axis, the y-axis is renamed as "h", which expresses the quantity counted per wavelength value; it is expressed in counts. All the computed values differ among them: the arithmetic method gives the lowest result, while the highest is the maximum value, as shown in figure 4.40. Qualitatively, the same result can be inferred from all of them: one peak at 460 nm, a valley near 500 nm and a wide peak between 540 nm and 600 nm, the typical distribution of a white LED. The spectrum shown represents only one of the 11 channels. Hence, using equation 4.13, all the channels can be displayed on the wavelength axis, as shown in figure 4.41. In this figure only 5 of the 11 are shown; channel 11 and channel 8 are markedly lower than the rest, because some of the channels are darker than others as a result of the different transmittance of the 11 optical fibers, although all of them should ideally be equal. The "h" scale should ideally represent the amount of light per wavelength expressed as intensity, but this is not an easy task, because the behavior is non-linear and an intensity correction would be needed. Another hurdle is that the optical fiber transmittance is not the best; hence, this approach is not addressed in this work.
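The four means for one column can be computed in a few lines; a minimal sketch, where the Gaussian parameters are assumptions (b is set to the column midpoint and c to a quarter of the column height, with a = 1) and the column is assumed non-empty and not all zero:

#include <vector>
#include <cmath>

struct ColumnMeans { double arithmetic, weighted, quadratic, gaussian; };

// Equations 4.15, 4.19, 4.20 and 4.21 for one column z[0..n-1] of a channel matrix.
ColumnMeans columnMeans(const std::vector<double> &z)
{
    double n = static_cast<double>(z.size());
    double sum = 0, sumSq = 0;
    for (double v : z) { sum += v; sumSq += v * v; }

    double b = 0.5 * (n - 1), c = n / 4.0;     // assumed Gaussian centre and width
    double wg = 0, wgz = 0;
    for (std::size_t i = 0; i < z.size(); ++i) {
        double w = std::exp(-std::pow(i - b, 2) / (2 * c * c));  // f_Gauss, a = 1
        wg += w; wgz += w * z[i];
    }
    return { sum / n,              // arithmetic mean, equation 4.15
             sumSq / sum,          // weighted (contraharmonic) mean, equation 4.19
             std::sqrt(sumSq / n), // quadratic mean (RMS), equation 4.20
             wgz / wg };           // weighted Gaussian mean, equation 4.21
}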
Figure 4.40: White light LED spectrum in channel 6, applying the different methods to compute it (maximum value, arithmetic mean, weighted mean, quadratic mean and weighted Gaussian mean)

Figure 4.41: Channels 1, 4, 6, 8 and 11 according to the Gaussian mean; in spite of having alike light distributions, the heights are not the same, and the channels need rescaling in order to display the spectra as similarly as possible

In fact, the results focus on finding the distribution of light, regardless of the amount of intensity. Usually, in applications with a spectrometer, the result is calculated from the spectra, often from spectra that require active lighting, like absorption or fluorescence [RCN15].

4.7 Resizing spectrum per channel

The spectrum varies among the channels, not in distribution, but in height "h". Channels 6 and 4 are the two with the highest counts. Considering that channel 6 is in the middle of the diffraction grating, this channel is taken as the best sample, since it lies within the optical path the system was made for; usually the line detector is placed where channel 6 is located now. Thus, the other channels are resized taking channel 6 as reference.

In order to rescale every spectrum, a factor has to be calculated. The best way to do it is an area scaling approach: the whole spectrum area is considered and a proportional value among the channels is calculated in order to make them alike. The first step is to find the total area per channel by integrating F(λ); as these are discrete values, a summation between 380 nm and 780 nm can be used. When all areas An are known, each of them is compared with the area of the middle channel (Ar, number 6) and a factor fn is calculated; fn multiplies each value of F(λ), increasing or decreasing the waves. The direction fn = Ar/An is consistent with the stored factors in table 4.5, where the dimmer channels get factors above 1.

A = \int_{380}^{780} F(\lambda)\,d\lambda \;\rightarrow\; \sum_{\lambda=380}^{780} F(\lambda), \qquad f_n = \frac{A_r}{A_n} \;\rightarrow\; f_n \cdot F(\lambda)    (4.22)

Figure 4.42: Rescaled channels according to the Gaussian mean

There is one fn per channel, calculated from equation 4.22. Using these values, each channel can be resized, so that the 11 channels have the same spectrum distribution; ideally, all of them receive the same light source and should be alike. The resized waves are shown in figure 4.42. Afterwards, the waves are closer; there is still some error, but it is definitely less than before. This image is only a reference, because only some channels with the Gaussian mean are shown; there are other methods and channels. In figure 4.43 the red line represents the initial size and the red area the error; after resizing, the error has been significantly reduced. The new rescaled line is shown in green and its error is hatched. Similar results are achieved in the other channels.
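A minimal sketch of equation 4.22 (the names are illustrative, the spectra are assumed to be sampled on the same wavelength grid between 380 nm and 780 nm, and the factor direction follows table 4.5):

#include <vector>
#include <numeric>

// Approximate each channel's area by summation and derive the rescaling factor
// fn = Ar / An, so that fn * F(lambda) matches the reference channel 6.
double scaleFactor(const std::vector<double> &F, const std::vector<double> &Fref)
{
    double An = std::accumulate(F.begin(), F.end(), 0.0);        // channel area
    double Ar = std::accumulate(Fref.begin(), Fref.end(), 0.0);  // channel 6 area
    return Ar / An;
}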
The aim is to display the 11 spectra as similarly as possible; since all of them receive the same light source, selecting one of the methods mentioned before is important.

Figure 4.43: Error between the original and the rescaled spectrum (channel 8 versus the reference channel 6)

The error shown before can be quantified and expressed in pixels²; once all methods were evaluated in every channel, they can be compared as shown in figure 4.44. Channel 6 has no error, since it is considered as reference. The method with the lowest error in almost all channels is the arithmetic mean, because this mean does not consider the relevance that each pixel has along the channel; in other words, the distance from the middle point, where ideally the highest point is. Hence, the other methods could show better results, because they consider the position of each pixel. The quadratic mean, the weighted mean and the Gaussian mean have almost similar errors; considering any of them makes no difference. Using a Kiviat diagram, all the results can be compared, as shown in figure 4.44. It allows opting for the best method: the worst is the maximum value, because it has the highest error across all channels. The Gaussian mean is considered the best option, because it grants a weight depending on the position: points close to the middle are considered more than the others. Sometimes data is lost owing to external reasons which are hard to control, like the optical fiber transmittance or the field curvature resulting from the use of a diffraction grating, among others. The final assessments will corroborate whether the results can be considered good or merely bad.

Nevertheless, the factor fn, which resizes the channels, is calculated using the arithmetic mean, because it produces the lowest error. The resize values are calculated only once, during calibration and using the white light source. Besides, fn is saved in a file named "scale.data", as shown in table 4.5; the next runs use the previously stored data.

Figure 4.44: The Kiviat diagram shows the error in pixels² (scale x10⁴) per channel and method (maximum value, arithmetic mean, weighted mean, quadratic mean and Gaussian mean); channel 6 is considered as reference

Table 4.5: The "scale.data" file is the last calibration file; it contains the factors f used to rescale the channels, which are saved and loaded on every use

Channel | f1   | f2   | f3   | f4   | f5   | f6   | f7   | f8   | f9   | f10  | f11
value   | 1.51 | 0.78 | 0.87 | 0.99 | 1.04 | 1.00 | 1.60 | 2.51 | 1.48 | 1.70 | 3.25

The complete code, in which all these steps are included, is attached as "Test_image" in the folder "Program" on the CD-ROM.

Chapter 5

Experimental results

In order to guarantee suitable results, these will be compared with others yielded by an Ocean Optics spectrometer. Like every spectrometer, the system needs to be calibrated as well; it relies on previously stored data obtained with the reference light source (white LED). Once the calibration files were stored, the system uses them in order to know exactly where each channel is and to display the spectra from the image data (11 channels).
5.1 Calibration process

Periodically the spectrometer should be recalibrated to compensate for the mechanical assembly and the current environmental conditions. The aim is to store the position of the channels, the key points to compute the spectra and the factors for rescaling each channel. A uniform light source is needed. Sunlight would be the best option but, considering that a uniform sunlight intensity is not always available, it is better to have a controllable source; thus, a white light emitting diode (LED) is recommended for this calibration process.

1. Get the exposure time: place the optical fiber in front of the white reference light source; when the exposure time becomes steady, fix the time
2. Take the reference image: take an image of the LED white light (see figure 5.1a)
3. Take the background image: turn off the light to take a background image
4. Detect the wavelengths in the image and save the data: provide different images with external sources whose wavelengths are known (diode lasers). Their light distribution should be narrow (±10 nm). This process is based on steps 1, 2 and 3: first place the optical fiber in front of the light source, once the exposure time is steady take an image, then turn off the light source and take another one
5. First run, calibration: the program uses the previous images to create the calibration files, which are used to disjoin the channels and yield the spectra
6. Take the inspection image: steps 1, 2 and 3 must always be done, although, if the background image was saved and the exposure time fixed, the inspection image alone is enough to display the spectrum of each channel

Once the reference image is stored, the 11 channels are decomposed as shown in figure 5.1b. Afterwards, the system is easy to use: place the optical fiber in front of the light source to inspect, acquire an image, and the spectra will be displayed.

Figure 5.1: (a) White LED as light source (b) Image acquired and decomposed

5.2 Displaying results

In some techniques, such as absorbance or fluorescence, the wavelength position values on the x-axis are more important than the light intensity. The aim is to display spectra; as the data has already been acquired, the spectra can be computed. This is a multi-channel spectrometer, hence 11 spectra can be shown. If all of them are tested using the same light source, the spectra should be alike; nevertheless, due to the hurdles already mentioned, there are some differences among them. The following graphics have the wavelength λ [nm] as x-axis and the height h [counts] as y-axis.

The assessment of the whole developed method gave good results; evidence of this is figure 5.2. In this graphic the spectra from LED white light are shown. Some differences among them can be observed: channel 2 is the lowest one, whereas the rest look very similar. However, in all of them the light distribution is exactly the same: there are two peaks, one at 460 nm and the other between 530 nm and 590 nm. Towards the end, these results will be compared with a standard spectrometer.

Figure 5.2: Spectra of the 11 channels using LED white light

5.3 Standard spectrometer as reference

The results are apparently good; now a comparison with a reference is desired in order to yield reliable results.
The aim now is to use a standard spectrometer to get spectra and compare them with the results yielded before. The equipment is a high-resolution spectrometer from Ocean Optics, model HR2000+ High-Speed Miniature Fiber Optic, which provides an optical resolution as good as 0.035 nm (FWHM) and is illustrated by figure 5.3a. It has to be connected to a notebook or desktop PC via USB or serial port; when it is connected to the PC, it does not need an external power supply. It is useful in chemistry and biochemistry applications.

A diagram of how light moves through the optical bench of an HR2000+ spectrometer is shown in figure 5.3. The optical bench has no moving parts that can wear or break; all the components are fixed. Each component is enumerated and explained below [Opt10, p. 16].

Figure 5.3: (a) HR2000+CG-UV-NIR spectrometer (b) HR2000+ spectrometer with its components [Opt10, p. 15]

1. SMA connector: secures the input fiber to the spectrometer
2. Slit: the size of the aperture regulates the amount of light that enters the optical bench and controls the spectral resolution
3. Filter: restricts the optical radiation to pre-determined wavelength regions
4. Collimating mirror: focuses the light entering the optical bench towards the grating of the spectrometer
5. Grating: diffracts the light from the collimating mirror and directs the diffracted light onto the focusing mirror
6. Focusing mirror: receives the light reflected from the grating and focuses it onto the CCD detector
7. Collection lens: optional component attached to the CCD detector; it focuses the light from a tall slit onto the shorter CCD detector elements
8. CCD detector: collects the light received from the focusing mirror or the L2 detector collection lens and converts the optical signal into a digital signal

Ocean Optics furnishes the SpectraSuite software, a Java-based spectroscopy software platform that operates on Windows, Macintosh and Linux operating systems; the last version was released in 2011, and newer versions and software are available on their web page. Once the software is installed on the computer, the device is connected via USB and recognized, and the optical fiber is connected to the spectrometer. The spectrometer passes the sample information to the operating software, which immediately displays the spectrum. If it does not display a spectrum, the integration time should be adjusted, normally by increasing this value until the spectrum is big enough. The user manual also recommends storing a dark measurement (which represents the background) in order to display only the interaction of the spectrometer with the light source. The result is displayed on the screen as shown in figure 5.4 and represents the LED white light distribution.

Figure 5.4: Ocean Optics SpectraSuite

The differences between this spectrometer and the one used in this thesis are evident in table 5.1. Ocean Optics uses a Sony CCD array sensor with 2048 pixels (see figure 5.5a), whereas a CMOS matrix sensor is used in this thesis.

Table 5.1: Characteristics of both spectrometers

Parameters | Ocean Optics [Opt10] | Proposed device
Model | HR2000+CG-UV-NIR | MQ013MG-E2 [Xim13]
Detector | CCD array, Sony ILX-511B linear silicon | CMOS matrix, EV76C560 1.3 Mpixels B&W
No. of elements | 2048 pixels | 1280 x 1024 pixels
Sensitivity | 41 photons per count at 600 nm | 6600 LSB10/(Lux·s)
Pixel size | 14 µm x 200 µm | 5.3 µm x 5.3 µm
ADC resolution | 14 bits | 10 bits
Power consumption | 220 mA @ 5 V DC | 60.6 mA @ 3.3 V DC
Detector range | 200 − 1100 nm | 380 − 900 nm
Gratings | 14 gratings available | 1 grating
Fiber optic | ∅2 mm x 1 + lens (SMA 905) | ∅0.3 mm x 11
Entrance aperture | 5, 10, 25, 100 or 200 µm wide slits | size of the fiber entrance (∅0.3 mm)
Optical resolution | 0.035 nm (FWHM) | ≈ 30 nm (FWHM)
Table 5.1: Characteristics of both spectrometers

Parameters         | Ocean Optics [Opt10]                    | Proposed device
Model              | HR2000+CG-UV-NIR                        | MQ013MG-E2 [Xim13]
Detector           | Sony ILX-511B linear silicon CCD array  | EV76C560 1.3 Mpixel B&W CMOS matrix
No. of elements    | 2048 pixels                             | 1280 x 1024 pixels
Sensitivity        | 41 photons per count at 600 nm          | 6600 LSB10/(Lux·s)
Pixel size         | 14 µm x 200 µm                          | 5.3 µm x 5.3 µm
ADC resolution     | 14 bits                                 | 10 bits
Power consumption  | 220 mA @ 5 V DC                         | 60.6 mA @ 3.3 V DC
Detector range     | 200 − 1100 nm                           | 380 − 900 nm
Gratings           | 14 gratings available                   | 1 grating
Fiber optic        | ∅2 mm x 1 + lens (SMA 905)              | ∅0.3 mm x 11
Entrance aperture  | 5, 10, 25, 100 or 200 µm wide slits     | fiber diameter sets the entrance (∅0.3 mm)
Optical resolution | 0.035 nm (FWHM)                         | ≈ 30 nm (FWHM)

Moreover, linear array sensors have bigger pixels and are more sensitive than matrix sensors, and the Sony detector has 14-bit resolution, whereas the Ximea sensor has only 10 bits. The range of the Ocean Optics device depends on the entrance aperture (200 − 1100 nm) and, as it uses an array detector, some optical elements are needed (rectangular slit, collimating mirror and focusing mirror); these improve the resolution, but redundancy in the data is not possible. The range of the matrix sensor depends on the mechanical assembly and the sensor area (≈ 380 − 900 nm). The slit is a piece containing a rectangular aperture that regulates the amount of light entering the optical bench, as shown in figure 5.5c. The device used in this thesis has no slit properly speaking: the optical fiber itself performs the slit function and leads the light directly to the grating, whose diffraction is received by the matrix sensor.

Figure 5.5: (a) CCD array, Sony ILX-511B linear silicon [Ham15a] (b) CMOS matrix, EV76C560 1.3 Mpixels B&W [Xim15a] (c) Rectangular aperture; its size regulates the amount of light that enters and controls the spectral resolution [RCN15, Fig. 6a] (d) The diameter of the optical fiber determines the size of the entrance aperture [RCN15, Fig. 6b]

The optical fiber has a diameter of only 0.3 mm, is made of plastic (see figure 5.5d) and has no lenses at its ends. Such fibers have no special characteristics and are available on the market; these standard features decrease the accuracy of the final results (≈ ±30 nm). Although the standard spectrometer has characteristics which make it better, many applications, such as absorbance or color measurement, do not require such high quality, and the same results can be achieved with the system explained here. Finally, many samples were measured with this spectrometer; the results obtained using the same light source in both spectrometers are compared in the next chapter.

Chapter 6 Discussion

6.1 Validation of results

To validate the obtained results, the standard spectrometer and the multi-channel spectrometer are compared using different light sources. Since the spectrometer has 11 channels, the final result of the described process is the display of 11 spectra, which are validated by comparison with the results of the standard spectrometer. The differences between the two devices were already explained; now their results (spectra) are displayed. Both need to be connected to the PC via USB, and the light source is placed in front of the optical fiber, which is the light entrance. The first test uses the reference light employed throughout the calibration process. The white LED light is compared in figure 6.1; only channel 6 is shown, because it is considered the best sample, as it lies in the middle, where a linear detector would normally be located.
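The agreement visible in such comparisons can also be quantified. A minimal sketch of the idea behind the shaded error area discussed below: both spectra are resampled onto a common wavelength grid by linear interpolation, and the absolute difference is summed. The Sample type and both helpers are illustrative, not part of the thesis software.

#include <algorithm>
#include <cmath>
#include <vector>

struct Sample { double lambda; double counts; };

// Linear interpolation of a sorted spectrum at wavelength x.
double interp(const std::vector<Sample> &s, double x){
    auto it = std::lower_bound(s.begin(), s.end(), x,
        [](const Sample &a, double v){ return a.lambda < v; });
    if (it == s.begin()) return s.front().counts;   // clamp below the range
    if (it == s.end())   return s.back().counts;    // clamp above the range
    const Sample &hi = *it, &lo = *(it - 1);
    double t = (x - lo.lambda) / (hi.lambda - lo.lambda);
    return lo.counts + t * (hi.counts - lo.counts);
}

// Error area between two spectra on [lo, hi], simple rectangle sum.
double errorArea(const std::vector<Sample> &a, const std::vector<Sample> &b,
                 double lo, double hi, double step = 1.0){
    double area = 0.0;
    for (double x = lo; x < hi; x += step)
        area += std::fabs(interp(a, x) - interp(b, x)) * step;
    return area;   // in [counts * nm]
}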
The spectra are quite similar; however, due to its high resolution, the Ocean Optics spectrometer detects some small peaks which are the result of external noise. The spectrum from channel 6 does not resolve these peaks, but the main shape (white LED) is alike. The error area between the two results is shaded. Near 620 nm the spectrum from channel 6 shows a small elevation produced by an external disturbance (the red LED of the Ximea image sensor). The characteristic peaks of the white LED appear in both cases, as does the valley near 500 nm. Regarding this test as a color measurement, the spectrum yielded by the matrix sensor provides the relevant information to recognize that this is a white LED.

Figure 6.1: Spectrum from the white light source compared with a reference obtained using the Spectrometer HR2000+CG-UV-NIR

The same comparison was repeated, but this time all 11 channels are displayed in figure 6.2. All of them have the same distribution; even though they do not reach the same height [counts], the shape is enough to verify that these are spectra of a white LED.

Figure 6.2: White light spectra from 11 channels compared with the Spectrometer HR2000+CG-UV-NIR

Afterwards, different light sources were inspected using three LEDs (red, green and blue); the spectra obtained are shown in figure 6.3. The combination of these three colors produces white light; when the light is diffracted into several beams, the spectra show the three colors separately. This test was made using all possible combinations of the three inputs, and the results are appended in appendix C.

Figure 6.3: (Red + Green + Blue) LED spectra from 11 channels compared with a reference obtained using the Spectrometer HR2000+CG-UV-NIR

During the calibration process, mainly two sources were used, a green and a red diode laser, because their wavelengths are known and their spectra are narrow. This was confirmed with the standard spectrometer and can now be compared with the matrix sensor (see figure 6.4). In the case of the green diode laser, the spectra agree on the same value (532 nm), although some of them look wider than others. Moreover, the intensity levels, displayed only as counts, do not reach the same value, despite the same light source being used. Similar measurements were made with the red laser and also with a blue laser (see appendix D). The blue laser was not used in the calibration process because its line does not fall within the range of the lower channels (410 nm − 870 nm), so it is not helpful there. The ranges per channel were calculated and are displayed in table 4.4.
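Checks like the 532 nm test can be automated by locating the maximum of a channel's spectrum and estimating its full width at half maximum (FWHM). A minimal sketch on the same (wavelength, counts) representation; it assumes the background has already been subtracted and that the spectrum contains a single dominant line:

#include <cstddef>
#include <vector>

struct Sample { double lambda; double counts; };

// Returns the peak wavelength and FWHM of the dominant line.
void peakAndFwhm(const std::vector<Sample> &s,
                 double &peakLambda, double &fwhm){
    std::size_t ip = 0;
    for (std::size_t i = 1; i < s.size(); ++i)      // locate the maximum
        if (s[i].counts > s[ip].counts) ip = i;
    peakLambda = s[ip].lambda;
    double half = s[ip].counts / 2.0;

    std::size_t l = ip, r = ip;                     // walk outwards to half maximum
    while (l > 0 && s[l].counts > half) --l;
    while (r + 1 < s.size() && s[r].counts > half) ++r;
    fwhm = s[r].lambda - s[l].lambda;
}

Applied to the green-laser images, peakLambda should fall within 532 ± 10 nm for every channel, while fwhm makes the observation that some channels look wider than others measurable.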
One of the advantages of the multi-channel spectrometer, in contrast to the standard spectrometer, is the ease of inspecting 11 different inputs instead of only one. Taking advantage of this, several spectra are now checked and compared with the reference acquired using the standard spectrometer.

Figure 6.4: Green laser diode (532 ± 10 nm) spectra from 11 channels compared with a reference obtained using the Spectrometer HR2000+CG-UV-NIR

Fluorescence spectroscopy is used for a wide variety of biomedical purposes [Lak13]. Using a light beam (ultraviolet light, UV), electrons in the molecules of certain compounds are excited, which causes them to emit light, not always in the visible range. The spectrometer, by means of its diffraction grating, separates the incident light from the fluorescent light. Fluorescence spectroscopy is used in, among others, biochemical, medical and chemical research fields for analyzing organic compounds [Wik15a].

Figure 6.5: Dyomics dye markers (a) Liquids under natural light (b) When the fluids receive ultraviolet (UV) light, they emit at a certain wavelength; the optical fibers of both spectrometers are placed in front of the bottles to inspect their behavior

To see the results the multi-channel spectrometer yields, some fluorescent dyes were tested. They are kept in bottles inside a black box to avoid external light. All of them look very similar, except the two in the middle, which have different colors (see figure 6.5a); even so, knowing their exact color is not possible. But when they receive ultraviolet (UV) light, as shown in figure 6.5b, they react and emit light at a certain wavelength. The fluorescence tests were made using the following dyes, from left to right:

• Orange excitation (DY-605), emission max at 624 nm [Dyo15d]
• UV-Megastokes (DY-350XL), emission max at 610 nm [Dyo15a]
• Rhodamine 6G (RH-6G), emission max at 566 nm [Wik15b]
• UV-Megastokes (DY-370XL), emission max at 473 nm [Dyo15b]
• Pyranine (HPTS), emission max at 511 nm [Sab15, p. 253]
• Orange excitation (DY-594), emission max at 615 nm [Dyo15c]

When the fluorescent dyes are excited, they emit light at a certain wavelength which can be seen using the multi-channel spectrometer. Figure 6.6 shows that the results are good: although the height is not similar, the emission value near 620 nm is recognized. Where the intensity is not good enough, this is due to the dye markers themselves, because the standard spectrometer achieved similar results. The results for the other fluids are appended in appendix E.

Figure 6.6: Absorption/emission test using a Dyomics dye marker [Dyo15e]
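These emission measurements lend themselves to the same kind of automated check: the emission maximum found in a channel is compared against the tabulated value of the dye. A tiny sketch; the 10 nm tolerance is an illustrative choice, not a thesis value:

#include <cmath>

// Does the measured emission maximum agree with the tabulated dye value?
bool emissionMatches(double measuredPeakNm, double tabulatedNm,
                     double toleranceNm = 10.0){
    return std::fabs(measuredPeakNm - tabulatedNm) <= toleranceNm;
}
// e.g. emissionMatches(peak, 624.0) for DY-605 [Dyo15d]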
A final test was made using a calibration light source, the HG-1 Mercury Argon lamp [Opt16b], which Ocean Optics uses to calibrate its spectrometers. It has strong mercury and argon emission lines; the narrow lines are compared in figure 6.7.

Figure 6.7: Spectra using the HG-1 Mercury Argon calibration light source manufactured by Ocean Optics, with emission lines at 404.66, 407.78, 435.84, 546.08, 576.96, 579.07, 696.54, 706.72, 727.29, 738.40, 750.39, 763.51 and 772.40 nm; the isolated channels are attached on the CD

This last figure is evidence that the obtained results are accurate: although the heights differ, the positions and distributions of the lines are similar. Comparing the broadened lines, the system has an estimated resolution of 30 nm and a measurement uncertainty of ±5 nm. More results can be found in the folder "Results", attached on a CD-ROM.

6.2 Considerations and limitations

The system has some factors which reduce its accuracy. As mentioned before, the optical fiber plays the role of the slit, and since it does not limit the light entrance, the diffracted spectrum loses precision. Displaying lines or narrow peaks as shown in figure 6.8 is therefore not possible: the spot in this figure, produced at 546.08 nm by the calibration light source, represents the minimum size of a received spot and hence the minimum detectable feature.

Figure 6.8: Acquired image using the calibration light source, the HG-1 Mercury Argon lamp

The optical fiber commonly used in spectrometers is wider, has lenses at its ends, and is combined with a slit to limit the light entering the optical elements. The amount of light that reaches the diffraction grating depends on the transmittance of the optical fiber, and the 11 fibers used here do not have the same transmittance, so the channels do not receive the same light intensity. Due to these transmittance differences, it is recommended to use the device with samples of good intensity; the system will then find the best exposure-time to acquire an image.

Another problem can occur when the image sensor is assembled. In contrast to a conventional spectrometer with many optical elements, this device only has an image sensor and a diffraction grating; even so, these two elements must be fixed to avoid mechanical problems. The screws (see figure 6.9) have free play, and if they are not tightened the sensor will move, changing the field of view (FOV) and invalidating the calibration. Hence, the sensor should be fixed to the plastic base. The number of channels is limited by the image height (1280 pixels) and the wavelength range by the image width (1024 pixels).

Figure 6.9: Sensor mounted on the diffraction grating (labels: diffraction grating, image sensor within the black element, screw); if the sensor is not well adjusted, clearance can displace the image and shift the wavelength range

Figure 6.10: Disposition of the LEDs on the Ximea sensor, with the optical fiber indicated [Xim13, p. 37]

The process includes a background subtraction, but for this the image the sensor receives should first be completely dark. The used sensor has three LEDs, indicated in figure 6.10: Status 1, Status 2 and Power. Two of the three LEDs can be turned off by setting the parameter XI_PRM_LED_SELECTOR in the Ximea hardware, but the power LED is always on.
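Based on the parameter mentioned above, the two status LEDs can be switched off through the xiAPI [Xim15d] before the background image is taken. A sketch; the selector indices used here are an assumption and may differ per camera model, and the power LED of the MQ013MG-E2 cannot be disabled this way:

#include <xiApi.h>   // Ximea xiAPI [Xim15d]

// Attempt to switch off the status LEDs before acquiring the background.
// Selector indices (1 and 2) are assumed; the power LED stays on.
void disableStatusLeds(HANDLE hCam){
    for (int led = 1; led <= 2; ++led){
        xiSetParamInt(hCam, XI_PRM_LED_SELECTOR, led);
        xiSetParamInt(hCam, XI_PRM_LED_MODE, XI_LED_OFF);
    }
}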
This power LED is a constant interference because it is always present in the images; it must be discarded by treating it as part of the background. If the background were ideally dark, its spectra would not look like those shown in figure 6.11: the bump at approximately 590 nm represents the mentioned power LED, while the rest stays roughly constant around a value produced by a background that is not completely dark. It cannot be compared with a reference because the power LED directly affects the image sensor.

Figure 6.11: Background spectra; the bumps near 600 nm are produced by the power LED, which could ruin the final results

Chapter 7 Summary and Outlook

Reducing noise in the acquired image is possible by subtracting the background, discarding hot pixels and applying filters through image processing; the result yields the information necessary to compute the spectra. The exposure-time is an important feature, and the inspected image and the background must always have the same exposure-time value. The image region containing the spectral information can be found without many hurdles as long as a suitable white light source is used. Then, using the data, the spectra can be computed and displayed. Thus, building a multi-channel spectrometer is a feasible task when a matrix sensor is utilized.

The results show that multi-spectral imaging can cover wavelengths between 400 nm and 800 nm with an estimated resolution of 30 nm. This is not very accurate, although the estimated precision is ±5 nm; nevertheless, it is enough for some applications such as fluorescence or LED measurement. This feature could be enhanced with optical fibers made of glass and with lenses at their ends, although this would increase the initial price.

In future work the developed code could be improved to reduce the run-time and display spectra in real time while the light source is inspected. Afterwards, the algorithm could be adapted and tested on mobile phones, taking advantage of the camera sensor.

The calibration process was done under the assumption that the wavelength has a linear distribution. Other approaches, such as quadratic or cubic fits, could be applied; such assessments could confirm that the first approach was good enough or refute it with better results. A sketch of such a fit is given below.
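As a starting point for that outlook, a higher-order wavelength mapping can be obtained by a least-squares polynomial fit through the known calibration lines. A minimal sketch using OpenCV's solver; it is an illustration of the idea, not code from the delivered software. With degree = 1 it reproduces the linear assumption used in this thesis:

#include <opencv2/opencv.hpp>
#include <vector>

// Fit lambda(x) = c0 + c1*x + ... + cN*x^N through known calibration
// lines (pixel position px[i], wavelength nm[i]) by least squares.
// Requires at least degree+1 calibration lines.
std::vector<double> fitWavelength(const std::vector<double> &px,
                                  const std::vector<double> &nm,
                                  int degree){
    cv::Mat A((int)px.size(), degree + 1, CV_64F);   // Vandermonde matrix
    cv::Mat b((int)px.size(), 1, CV_64F);
    for (int i = 0; i < (int)px.size(); ++i){
        double v = 1.0;
        for (int j = 0; j <= degree; ++j){ A.at<double>(i, j) = v; v *= px[i]; }
        b.at<double>(i, 0) = nm[i];
    }
    cv::Mat c;
    cv::solve(A, b, c, cv::DECOMP_SVD);              // least-squares solution
    return std::vector<double>(c.begin<double>(), c.end<double>());
}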
Bibliography

[AR05] Acharya, Tinku; Ray, Ajoy K.: Image Processing: Principles and Applications. John Wiley & Sons, 2005
[BM12] Burgess, Christopher; Mielenz, K. D.: Advances in Standards and Methodology in Spectrophotometry. Bd. 2. Elsevier, 2012
[Bou88] Bourke, Paul: Calculating the area and centroid of a polygon. July 1988. http://paulbourke.net/geometry/polygonmesh/
[BQGZ11] Bi, Yunfeng; Qi, Fujun; Guo, Jinjia; Zheng, Ronger: The spectral image acquisition and processing system based on folded gratings spectrograph and ICCD. In: Proceedings of 2011 International Conference on Electronic and Mechanical Engineering and Information Technology, EMEIT 2011 3 (2011), S. 1332–1335. http://dx.doi.org/10.1109/EMEIT.2011.6023340. – DOI 10.1109/EMEIT.2011.6023340. – ISBN 9781612840857
[DSW+15] Das, Anshuman; Swedish, Tristan; Wahi, Akshat; Moufarrej, Mira; Noland, Marie; Gurry, Thomas; Aranda-Michel, Edgar; Aksel, Deniz; Wagh, Sneha; Sadashivaiah, Vijay; Zhang, Xu; Raskar, Ramesh: Mobile phone based mini-spectrometer for rapid screening of skin cancer. In: Proc. SPIE 9482 (2015), 94820M. http://dx.doi.org/10.1117/12.2182191. – DOI 10.1117/12.2182191
[EPT+14] Ernst, D.; Peyer, M.; Täschler, D.; Steiner, Patrick; Bossen, A.; Považay, B.; Meier, Ch.: Multi-channel near-infrared spectrometer for functional depth-resolved tissue examination and positioning applications. In: Proc. SPIE 8938 (2014), 89380J. http://dx.doi.org/10.1117/12.2036381. – DOI 10.1117/12.2036381
[Goe09] Goetz, Alexander F.: Three decades of hyperspectral remote sensing of the Earth: A personal view. In: Remote Sensing of Environment 113 (2009), S. S5–S16
[GW08] Gonzalez, R. C.; Woods, R. E.: Digital Image Processing. Pearson/Prentice Hall, 2008. https://books.google.de/books?id=8uGOnjRGEzoC. – ISBN 9780131687288
[GYM07] Garini, Yuval; Young, Ian T.; McNamara, George: Spectral Imaging: Principles and Applications. In: Cytometry Part A 71 (2007), Nr. 1, S. 8–15. http://dx.doi.org/10.1002/cyto.a. – DOI 10.1002/cyto.a
[HCCJ15] Hossain, Md. A.; Canning, John; Cook, Kevin; Jamalipour, Abbas: Portable smartphone optical fibre spectrometer. In: Proc. SPIE 9634 (2015), 963411. http://dx.doi.org/10.1117/12.2195306. – DOI 10.1117/12.2195306
[Lak13] Lakowicz, Joseph R.: Principles of Fluorescence Spectroscopy. Springer Science & Business Media, 2013
[LHW+13] Li, Qingli; He, Xiaofu; Wang, Yiting; Liu, Hongying; Xu, Dongrong; Guo, Fangmin: Review of spectral imaging technology in biomedical engineering: achievements and challenges. In: Journal of Biomedical Optics 18 (2013), Nr. 10, 100901. http://dx.doi.org/10.1117/1.JBO.18.10.100901. – DOI 10.1117/1.JBO.18.10.100901
[MATN06] Manuilskiy, A.; Andersson, H. A.; Thungström, G.; Nilsson, H. E.: Multi channel array interferometer-fourier spectrometer. In: 2006 Northern Optics Conference Proceedings (2006), S. 1–6. http://dx.doi.org/10.1109/NO.2006.348363. – DOI 10.1109/NO.2006.348363. – ISBN 1424404355
[Mei69] Meister, Albrecht Ludwig F.: Generalia de genesi figurarum planarum et inde pendentibus earum affectionibus. 1769
[MZ99] Madsen, Christi K.; Zhao, Jian H.: Optical Filter Design and Analysis. Wiley-Interscience, 1999
[Opt10] Ocean Optics, Inc.: HR2000+ Spectrometer Installation and Operation Manual. v1.0. Dunedin, FL, USA: Ocean Optics, Inc., 2010. http://oceanoptics.com/wp-content/uploads/hr2000-.pdf
[Pra07] Pratt, William K.: Digital Image Processing. John Wiley and Sons, Inc., 2007
[RCN15] Reetz, Edgar; Correns, Martin; Notni, Gunther: Cost effective spectral sensor solutions for hand held and field applications. In: SPIE Optical Metrology, 95253J. International Society for Optics and Photonics, 2015. http://dx.doi.org/10.1117/12.2184707. – DOI 10.1117/12.2184707
[RMB+06] Rosenberger, Maik; Margraf, Jörg; Brückner, Peter; Töpfer, Susanne; Linß, Gerhard: Monolithic Miniature Spectral Sensor for Multi-Channel Spectral Analysis. (2006), S. 398–403
[Sab15] Sabnis, Ram W.: Handbook of Fluorescent Dyes and Probes. John Wiley & Sons, 2015
[SES10] Starr, Cecie; Evers, Christine; Starr, Lisa: Biology: Concepts and Applications Without Physiology. Cengage Learning, 2010
[S.S85] Suzuki, S.; Abe, K.: Topological structural analysis of digitized binary images by border following. In: Computer Vision, Graphics, and Image Processing 30 (1985), S. 32–46
[Tou83] Toussaint, Godfried: Solving geometric problems with the rotating calipers. In: Proc. IEEE MELECON '83 (1983), S. 1–8
[Xim13] Ximea GmbH: xiQ USB 3.0 Camera Series, Technical Manual. Version 1.02. Münster, Germany: Ximea GmbH, July 2013. http://www.jmakautomation.com/documents/xiQ_TechnicalManual_v1.02.pdf
[ZN42] Ziegler, J. G.; Nichols, N. B.: Optimum Settings for Automatic Controllers. In: Transactions of the A.S.M.E. 64 (1942), S. 759–768. http://dx.doi.org/10.1115/1.2899060. – DOI 10.1115/1.2899060

Internet Sources

[Ass16] The Fiber Optic Association: Optical Fiber. http://www.thefoa.org/tech/ref/basic/fiber.html. 2016. – Accessed: 24.02.2016
[BWT15a] BWTek: Choosing a Fiber Optic. http://bwtek.com/spectrometer-part-6-choosing-a-fiber-optic/. 2015. – Accessed: 08.03.2016
[BWT15b] BWTek: The Diffraction Grating. http://bwtek.com/spectrometer-part-2-the-grating/. 2015. – Accessed: 08.03.2016
[BWT15c] BWTek: The Optical Bench. http://bwtek.com/spectrometer-part-4-the-optical-bench/. 2015. – Accessed: 08.03.2016
[Dev15] Nano Optic Devices: Nano-Stick Spectrometer. http://www.nanoopticdevices.com/#!spectrometers/c1ylt. 2015. – Accessed: 25.02.2016
[Dyo15a] Dyomics: DY-350XL. http://www.dyomics.com/en/products/uv-megastokes/dy-350xl.html. 2015. – Accessed: 17.02.2016
[Dyo15b] Dyomics: DY-370XL. http://www.dyomics.com/en/products/uv-megastokes/dy-370xl.html. 2015. – Accessed: 17.02.2016
[Dyo15c] Dyomics: DY-594. http://www.dyomics.com/en/products/orange-excitation/dy-594.html. 2015. – Accessed: 17.02.2016
[Dyo15d] Dyomics: DY-605. http://www.dyomics.com/en/products/orange-excitation/dy-605.html. 2015. – Accessed: 17.12.2015
[Dyo15e] Dyomics: Products. http://www.dyomics.com/en/products/. 2015. – Accessed: 15.02.2016
[E2V15] E2V: Sapphire 1.3M - EV76C560. http://www.e2v.com/products/imaging/cmos-sensors/ev76c560/. 2015. – Accessed: 17.12.2015
[Ham15a] Hamamatsu: Advances in CMOS image sensors open doors to many applications. http://www.hamamatsu.com/sp/hc/osh/osh_013_002_figure02.jpg. 2015. – Accessed: 16.02.2016
[Ham15b] Hamamatsu: Mini-spectrometers. http://www.hamamatsu.com/us/en/4016.html. 2015. – Accessed: 25.02.2016
[Kai06] Kaiser, Peter: Electromagnetic Spectrum. http://www.yorku.ca/eye/spectru.htm. 2006. – Accessed: 25.02.2016
[Las15a] Berlin Lasers: Green Laser. http://www.berlinlasers.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/5/1/515nm-green-laser-diode-module-1.jpg. 2015. – Accessed: 19.01.2016
[Las15b] RGB Lasersystems: Qstick - The World's first USB stick Spectrometer. http://www.rgb-laser.com/content_products/product_qstick.html. 2015. – Accessed: 25.02.2016
[Las16] Berlin Lasers: 650nm Red Laser Diode module. http://www.berlinlasers.com/650nm-red-laser-diode-module. 2016. – Accessed: 19.02.2016
[Mig15] Mightex: Multi-Channel CCD Spectrometers. http://www.mightexsystems.com/family_info.php?cPath=&categories_id=196. 2015. – Accessed: 25.02.2016
[Nor15] JUST Normlicht: GL Optic SPECTIS 1.0 (mini-spectrometer). http://www.just-normlicht.de/uk/articlelist.html?id=GL%20Optic. 2015. – Accessed: 25.02.2016
[Oly16] Olympus: Birth of Fiberscopes. http://www.olympus-global.com/en/corc/history/story/endo/fiber/. 2016. – Accessed: 24.02.2016
[Ope15a] OpenCV: OpenCV 2.4 documentation, Image Filtering. http://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html?highlight=blur#cv2.blur. 2015. – Accessed: 15.12.2015
[Ope15b] OpenCV: Operations on Arrays. http://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html. 2015. – Accessed: 10.11.2015
[Ope15c] OpenCV: Structural Analysis and Shape Descriptors. http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=moments#moments. 2015. – Accessed: 12.11.2015
[Opt15a] Edmund Optics: Bandpass Filters. http://www.edmundoptics.com/optics/optical-filters/bandpass-filters/. 2015. – Accessed: 10.01.2016
[Opt15b] Ocean Optics: STS-VIS - Vis Spectral Analysis in a Tiny Footprint. http://oceanoptics.com/product/sts-vis-microspectrometer/. 2015. – Accessed: 25.02.2016
[Opt15c] Ocean Optics: USB Series Spectrometers. http://oceanoptics.com/product-category/usb-series/. 2015. – Accessed: 25.02.2016
[Opt16a] Ocean Optics: Absorbance. http://oceanoptics.com/measurementtechnique/absorbance/. 2016. – Accessed: 15.02.2016
[Opt16b] Ocean Optics: HG-1 Mercury Argon Calibration Source. http://oceanoptics.com/product/hg-1/. 2016. – Accessed: 17.02.2016
[PSW16] Art of Problem Solving Wiki: Shoelace Theorem. http://www.artofproblemsolving.com/wiki/index.php?title=Shoelace_Theorem. 2016. – Accessed: 01.02.2016
[Roc16] Rockwell, Ken: Hot pixels. http://kenrockwell.com/tech/hot-pixels/index.htm. 2016. – Accessed: 24.02.2016
[SPE16] SPECIM: IMSPECTOR Multipoint Spectrometers. http://www.specim.fi/products/multipoint-spectrometers/. 2016. – Accessed: 25.02.2016
[Tho14] ThorLabs: Compact CCD Spectrometers. https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=3482. 2014. – Accessed: 25.02.2016
[Tub11] EXFO Tube: Diffraction Grating - EXFO animated glossary of Fiber Optics. https://www.youtube.com/watch?v=SO7ZlMJv5ZM. 2011. – Accessed: 18.02.2016
[Wik15a] Wikipedia, the free encyclopedia: Fluorescence spectroscopy. https://en.wikipedia.org/wiki/Fluorescence_spectroscopy. 2015. – Accessed: 17.02.2016
[Wik15b] Wikipedia, the free encyclopedia: Rhodamine 6G. https://en.wikipedia.org/wiki/Rhodamine_6G. 2015. – Accessed: 17.02.2016
[Wik16a] Wikipedia, the free encyclopedia: Exposure (photography). https://en.wikipedia.org/wiki/Exposure_(photography). 2016. – Accessed: 24.02.2016
[Wik16b] Wikipedia, the free encyclopedia: Full width at half maximum. https://en.wikipedia.org/wiki/Full_width_at_half_maximum. 2016. – Accessed: 18.02.2016
[Wik16c] Wikipedia, the free encyclopedia: Diffraction grating. https://en.wikipedia.org/wiki/Diffraction_grating. 2016. – Accessed: 18.02.2016
[Wik16d] Wikipedia, the free encyclopedia: Optical filter. https://en.wikipedia.org/wiki/Optical_filter. 2016. – Accessed: 18.02.2016
[Wik16e] Wikipedia, the free encyclopedia: Optical spectrometer. https://en.wikipedia.org/wiki/Optical_spectrometer. 2016. – Accessed: 18.02.2016
[Xim15a] Ximea: Board level cameras - USB3 Vision. http://www.lambdaphoto.co.uk/media/catalog/product/cache/1/small_image/200x/9df78eab33525d08d6e5fb8d27136e95/b/r/brd.jpg. 2015. – Accessed: 12.11.2015
[Xim15b] Ximea: MQ013MG-E2-BRD. http://www.ximea.com/en/products/usb3-vision-standard-designed-cameras-xiq/board-level/mq013mg-e2-brd. 2015. – Accessed: 17.12.2015
[Xim15c] Ximea: Support OpenCV. https://www.ximea.com/support/wiki/vision-libraries/OpenCV. 2015. – Accessed: 17.12.2015
[Xim15d] Ximea: xiAPI. http://www.ximea.com/support/wiki/apis/XiAPI. 2015. – Accessed: 20.11.2015
[Xim15e] Ximea: xiAPI Manual. https://www.ximea.com/support/wiki/apis/XiApi_Manual. 2015. – Accessed: 12.11.2015
[Xim15f] Ximea: XIMEA API Software Package. http://www.ximea.com/support/wiki/apis/XIMEA_API_Software_Package. 2015. – Accessed: 10.11.2015

Appendix

Appendix A: Datasheet sensor EV76C560

Table 1-1: Typical electro-optical performance @ 25°C and 65°C, nominal pixel clock

Parameter                          | Unit          | Typical value
Sensor characteristics:
Resolution                         | pixels        | 1280 (H) × 1024 (V)
Image size                         | mm            | 6.9 (H) × 5.5 (V), 8.7 (diagonal)
                                   | inches        | ≈ 1/1.8
Pixel size (square)                | µm            | 5.3 × 5.3
Aspect ratio                       | -             | 5/4
Max frame rate                     | fps           | 60 @ full format
Pixel rate                         | Mpixels/s     | 90 - 120
Bit depth                          | bits          | 10
Pixel performance:                 |               | @ TA 25°C    @ TA 65°C
Dynamic range (1)                  | dB            | >62          >57
Qsat                               | ke-           | 12
SNR max                            | dB            | 41           39
MTF at Nyquist, λ = 550 nm         | %             | 50
Dark signal (2)                    | LSB10/s       | 24           420
DSNU (2)                           | LSB10/s       | 6            116
PRNU (3) (RMS)                     | %             | <1
Responsivity (2)(4)                | LSB10/(Lux·s) | 6600
Electrical interface:
Power supplies                     | V             | 3.3 & 1.8
Power consumption: functional (5)  | mW            | <200
Power consumption: standby         | µW            | 180

Notes: (1) In electronic rolling shutter (ERS) mode. (2) Min gain, 10 bits. (3) Measured @ Vsat/2, min gain. (4) 3200 K, window without AR coating, IR cutoff filter BG38 2 mm. (5) @ 60 fps, full format, with 10 pF on each output.

Figure 1-1: Spectral response and quantum efficiency
(Excerpt from the e2v datasheet 1005B-IMAGE-11/10/11, e2v semiconductors SAS 2011)

Appendix B: Optical Filters

Carl Zeiss, VEB Carl Zeiss JENA: Prüfschein für Metallinterferenzfilter (test certificate for metal interference filters)

JF   | λmax    | τmax | FWHM    | Typ | Fabr.-Nr.
350  | 350 nm  | 32%  | 20 nm   | ∅50 | G6162/9
375  | 371 nm  | 32%  | 16 nm   | ∅50 | G6366/8
400  | 402 nm  | 37%  | 12 nm   | ∅50 | G6655/10
425  | 424 nm  | 48%  | 10 nm   | ∅50 | F3960/4
436  | 438 nm  | 42%  | 8 nm    | ∅50 | G7125/1
450  | 449 nm  | 32%  | 10 nm   | ∅50 | G6407/9
475  | 476 nm  | 47%  | 9.5 nm  | ∅50 | F4010/9
500  | 499 nm  | 44%  | 8.5 nm  | ∅50 | G6424/3
525  | 525 nm  | 40%  | 6 nm    | ∅50 | F4582/8
550  | 555 nm  | 43%  | 5.5 nm  | ∅50 | G7168/9
575  | 574 nm  | 40%  | 7.5 nm  | ∅50 | G6969/2
589  | 590 nm  | 43%  | 11 nm   | ∅50 | G7131/5
600  | 594 nm  | 41%  | 8 nm    | ∅50 | G6464/1
625  | 620 nm  | 39%  | 9 nm    | ∅50 | G7078/1
650  | 655 nm  | 39%  | 7.5 nm  | ∅50 | G6729/8
675  | 677 nm  | 42%  | 10 nm   | ∅50 | G7053/2
700  | 698 nm  | 43%  | 11 nm   | ∅50 | G7141/4
725  | 729 nm  | 42%  | 11.5 nm | ∅50 | G7186/4
750  | 749 nm  | 47%  | 11 nm   | ∅50 | G7136/10
775  | 771 nm  | 37%  | 8.5 nm  | ∅50 | G7157/6
800  | 800 nm  | 46%  | 12.5 nm | ∅50 | G6393/5
825  | 829 nm  | 47%  | 15.5 nm | ∅50 | G6401/9
850  | 852 nm  | 42%  | 10.5 nm | ∅50 | G6680/5
875  | 881 nm  | 42%  | 12 nm   | ∅50 | G6328/10
900  | 909 nm  | 40%  | 8 nm    | ∅50 | F4391/3
925  | 925 nm  | 32%  | 8.5 nm  | ∅50 | F3843/9
950  | 942 nm  | 44%  | 19.5 nm | ∅50 | G6364/4
975  | 983 nm  | 42%  | 14 nm   | ∅50 | G6342/2
1000 | 1005 nm | 42%  | 15 nm   | ∅50 | G7147/9
1025 | 1028 nm | 40%  | 19.5 nm | ∅50 | G7133/3
1050 | 1048 nm | 44%  | 13.5 nm | ∅50 | G7159/6
1075 | 1074 nm | 30%  | 15 nm   | ∅50 | G7095/6
1100 | 1099 nm | 30%  | 15 nm   | ∅50 | G7093/10

Appendix C: LED tests

Each plot compares the spectra from the 11 channels with the Ocean Optics HR2000+CG-UV-NIR reference, for the following sources: Blue LED; Green LED; Red LED; Blue and Green LEDs; Green and Red LEDs; Blue and Red LEDs.

Appendix D: Laser diodes

Red laser diode (650 nm ± 10): spectra from the 11 channels compared with the Ocean Optics HR2000+CG-UV-NIR reference
Blue laser diode (405 nm ± 10): spectra from the 11 channels compared with the Ocean Optics HR2000+CG-UV-NIR reference

Appendix E: Dyomics Dye Markers

Each plot compares the channels indicated with the Ocean Optics HR2000+CG-UV-NIR reference under UV excitation:
• Orange excitation (DY-605), channels 7 and 10: absorption/emission max 600 nm / 624 nm (in ethanol)
• UV-Megastokes (DY-350XL), channels 4 and 11: absorption/emission max 349 nm / 610 nm (in ethanol)
• Rhodamine 6G (RH-6G), channels 1 and 3: absorption/emission max 530 nm / 566 nm
• UV-Megastokes (DY-370XL), channel 6: absorption/emission max 368 nm / 473 nm (in PBS)
• Pyranine (HPTS), channels 2 and 8: absorption/emission max 454 nm / 511 nm (buffers pH 9.0)
• Orange excitation (DY-594), channels 5 and 9: absorption/emission max 594 nm / 615 nm (in ethanol)

Appendix F: Code: Find rotation

void findRotation(Mat &image, Mat &mRgbRescale, Point2f &P1t, Point2f &P2t,
                  double &resultDegrees, double posPeak){
    Mat blurr, threshold_output, ttadjMap;
    blur(image, blurr, Size(3,3));
    threshold(blurr, threshold_output, (float)(posPeak + posPeak/2)/1023.0,
              1.0, THRESH_BINARY);
    vector<vector<Point>> contours;
    vector<Vec4i> hierarchy;
    // Convert to an 8-bit image to be able to use "findContours"
    double max, min;
    minMaxIdx(threshold_output, &min, &max);
    convertScaleAbs(threshold_output, ttadjMap, 255/max);
    findContours(ttadjMap, contours, hierarchy, CV_RETR_TREE,
                 CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
    vector<RotatedRect> minRect(contours.size());   // bounding rectangles
    vector<Moments> mu(contours.size());            // moments
    vector<Point2f> mc(contours.size());            // mass centers
    double maxArea = 0; int posMax = 0;             // find the biggest blob
    for(uint i = 0; i < contours.size(); i++){      // (loop body reconstructed from the damaged source)
        minRect[i] = minAreaRect(Mat(contours[i]));
        mu[i] = moments(contours[i], false);
        mc[i] = Point2f(mu[i].m10/mu[i].m00, mu[i].m01/mu[i].m00);
        double area = contourArea(contours[i]);
        if(area > maxArea){ maxArea = area; posMax = i; }   // save the index
    }
    Point2f rect_points[4];
    minRect[posMax].points(rect_points);
    double maxres = 0;                     // measure the four sides of the rectangle
    for(int j = 0; j < 4; j++){
        double res = cv::norm(cv::Mat(rect_points[j]),
                              cv::Mat(rect_points[(j+1)%4]));
        if(res > maxres){ maxres = res; }  // save the largest
    }
    bool inMaxLine = true;
    for(int j = 0; j < 4; j++){
        double res = cv::norm(cv::Mat(rect_points[j]),
                              cv::Mat(rect_points[(j+1)%4]));
        if(res == maxres && inMaxLine){    // check only the largest side
            inMaxLine = false;             // horizontal line as reference
            Point2f P1 = rect_points[j];
            Point2f P2 = rect_points[(j+1)%4];
            if (P1.x <= P2.x){ resultDegrees = atan2((P2.y-P1.y),(P2.x-P1.x)); }
            else             { resultDegrees = atan2((P1.y-P2.y),(P1.x-P2.x)); }
            // Move the line to where the channel begins
            double dt = maxres/3;
            double h  = -dt*cos(resultDegrees);
            // Line equation through P1 and P2: y = m*x + C
            double m = (P2.y-P1.y)/(P2.x-P1.x);
            double C = mc[posMax].y - m*mc[posMax].x;
            double ncmy = m*(mc[posMax].x + h) + C;
            Point2f NewCM(mc[posMax].x + h, ncmy);
            PerpendicularLine(image, P1, P2, P1t, P2t, NewCM);  // return the orthogonal line
        }
    }
}
Appendix G: Code: Create floating points

// Create floating points (x,y) between two points
void CreatePoints2(Point2f P1t, Point2f P2t, vector<Point2f> &PLine){
    // The points between the borders P1t and P2t are created
    double res = cv::norm(cv::Mat(P1t), cv::Mat(P2t));
    PLine.resize(floor(res) - 1);   // round to the next integer number
    vector<float> xx;               // position x
    vector<float> yy;               // position y
    vector<float> da(2);            // InputArray x
    vector<float> db(2);            // InputArray y
    vector<float> man(2);           // InputArray size, length
    vector<float> angle(2);         // angle between P1t and P2t
    da[0] = 0;  da[1] = P2t.x - P1t.x;
    db[0] = 0;  db[1] = P2t.y - P1t.y;
    phase(da, db, angle, true);     // true: [degrees], false: [radians]
    man[0] = 0;
    // for (int i = 0; i < ...     (the remainder of this listing is lost in the source)
}

Appendix H: Code: Find key points

// (The heading and the first lines of this listing are damaged in the source;
//  the function name and leading parameters are reconstructed placeholders.)
void findKeyPoints(Mat &image, Mat &salPo, vector<Point2f> &Ins,
                   double limit, float &m0, bool &Switch, double n,
                   vector<Point2f> &dat){
    double factor  = cv::norm(cv::Mat(Ins[0]), cv::Mat(Ins[n]));
    double factor1 = cv::norm(cv::Mat(Ins[0]), cv::Mat(Ins[n+1]));
    double m;                       // slope between two consecutive points
    // Here only the change in rows within a single column is considered
    Mat A;                          // points 1 and 2, bilinear interpolation
    getRectSubPix(image, Size(1,1), Ins[n], A);
    float val = A.at<float>(Point(0,0));
    getRectSubPix(image, Size(1,1), Ins[n+1], A);
    float val1 = A.at<float>(Point(0,0));
    double PoY0 = cvRound(val*1024);
    double PoY1 = cvRound(val1*1024);
    m = (PoY1 - PoY0)/1.0000;       // current line slope
    // Condition 1: slope between 1 and -1 and above the low limit
    if (PoY0 > limit && m == 0){
        // mark the found point in the output image (orange)
        circle(salPo, Point(cvRound(val*1024), factor),
               2, Scalar(230,165,0,100), 20, 4);
    }
    // Condition 2: slope crossing 0
    if ((PoY0 > limit && m >= 0 && m0 <= 0) || (PoY0 > limit && m <= 0 && m0 >= 0)){
        // mark the found point in the output image (light blue)
        circle(salPo, Point(cvRound(val*1024), factor1),
               2, Scalar(0,77,102,100), 20, 4);
    }
    // Condition 3: the curve crosses the low limit
    if (Switch && cvRound(val*1024) > limit){
        circle(salPo, Point(cvRound(val*1024), factor),
               2, Scalar(255,255,255), 20, 4);   // (z0)
        Switch = false;
    }
    // The final condition is truncated in the source:
    // if (!Switch && cvRound(val*1024) > ...){ ... }
}

Appendix I: Code: Crop channels

// (The heading and the first lines of this listing are damaged in the source;
//  the function name and parameter types are reconstructed placeholders.)
void cropChannel(Mat src, double beta,
                 vector<vector<Point2f>> pointsChannel,
                 vector<double> widthChannel,
                 vector<vector<Mat>> &SChannel){
    Point2f P1, P2, P3, P4;
    vector<float> angle0;
    vector<float> angle90(2);
    vector<float> xx; vector<float> yy;
    vector<float> da(2); vector<float> db(2); vector<float> man(2);
    da[0]=0; db[0]=0; man[0]=0; angle90[0]=0;   // fix the initial values
    for (int cc = 1; cc < 12; cc++){            // access each channel
        P1 = pointsChannel[0][cc];   P2 = pointsChannel[1][cc];   // line 1
        P3 = pointsChannel[0][cc-1]; P4 = pointsChannel[1][cc-1]; // line 2
        da[1] = P2.x - P1.x;  db[1] = P2.y - P1.y;
        phase(da, db, angle0, true);     // 360 - angle0[1] -> alpha [degrees]
        double alpha = 360 - angle0[1];
        double theta = 180 - beta;       // [degrees]
        double gama  = 90 + alpha - beta;
        double limit = cos(gama*PI/180);
        vector<Mat> ForeplusChannel;     // vector with the new matrices
        double res = cv::norm(cv::Mat(P1), cv::Mat(P2));   // line norm
        Mat final2 = Mat::zeros(Size(int(res),
                     floor(widthChannel[cc]/limit)), src.type());
        Mat final3 = Mat::zeros(src.size(), src.type());
        for(int i = 0; i < int(res); i++){
            for(int j = 0; j < final2.rows; j++){
                Point2f Pn2;   // sampled position in the source image
                // ... (the computation of Pn2 is lost in the source)
                if (Pn2.y > src.rows-1){ break; }
                if (Pn2.x > 0 && Pn2.x < src.cols && Pn2.y > 0 && Pn2.y < src.rows){
                    Mat A;
                    getRectSubPix(src, Size(1,1), Pn2, A);
                    float valn = A.at<float>(Point(0,0));   // read the new value
                    final2.at<float>(Point(i,j)) = valn;    // remake a new matrix
                    float valc = src.at<float>(Pn2);
                    final3.at<float>(Pn2) = valc;
                }
            }
        }
        Rect myROI2(0, 0, final2.cols, widthChannel[cc]-1);
        Mat Channel2 = final2(myROI2);            // image with 32FC1
        ForeplusChannel.push_back(Channel2);      // save a new matrix per channel
        ForeplusChannel.push_back(final3);        // new image with only one channel
        SChannel.push_back(ForeplusChannel);
    }
}

Appendix J: Code: Wavelength calibration

// Find the center of mass per channel, findCM
// Input:  SChannel, matrices with only one channel
// Output: vector with the centers of mass of all the blobs
void findCM(vector<vector<Mat>> SChannel, vector<Point2f> &CenterMass){
    for(uint i = 0; i < SChannel.size(); i++){
        Mat B, blurr, threshold_output, adjMap;
        SChannel[i][3].copyTo(B);    // image with only one channel
        blur(B, blurr, Size(9,9));
        double max, min;             // check max and min values
        minMaxIdx(blurr, &min, &max);
        if(max - float(80.0/1023) >= min + float(100.0/1023)){
            threshold(blurr, threshold_output, max - float(80.0/1023),
                      1.0, THRESH_BINARY);
            vector<vector<Point>> contours;
            vector<Vec4i> hierarchy;
            minMaxIdx(threshold_output, &min, &max);
            // Convert to an 8-bit image to be able to use "findContours"
            convertScaleAbs(threshold_output, adjMap, 255/max);
            findContours(adjMap, contours, hierarchy, CV_RETR_TREE,
                         CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
            vector<Moments> mu(contours.size());    // moments
            vector<Point2f> mc(contours.size());    // mass centers
            double maxArea = 0;
            int posMax = 0;
            for(uint k = 0; k < contours.size(); k++){  // (loop body reconstructed from the damaged source)
                mu[k] = moments(contours[k], false);
                mc[k] = Point2f(mu[k].m10/mu[k].m00, mu[k].m01/mu[k].m00);
                double area = contourArea(contours[k]);
                if(area > maxArea){ maxArea = area; posMax = k; }
            }
            // Save the center of mass of the biggest blob
            CenterMass.push_back(mc[posMax]);
        }
    }
}
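For orientation, the appendix routines fit together roughly as follows; the fragment below is a hypothetical sketch with illustrative variable names, not code from the thesis:

// Hypothetical call sequence of the appendix routines
vector<vector<Mat>> SChannel;   // filled by the channel cropping routine (Appendix I)
vector<Point2f> CenterMass;     // centers of known laser lines per channel
findCM(SChannel, CenterMass);   // Appendix J
// Each center of mass marks a known line (e.g. 532 nm) inside one channel
// and feeds the pixel-to-wavelength mapping of section 4.4.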