IET Image Processing

Archives Papers: 474
Securing DICOM images by a new encryption algorithm using Arnold transform and Vigenère cipher
Mohamed Boussif, Noureddine Aloui, Adnene Cherif
Keywords: cryptography; data compression; image coding; image colour analysis; image processing; DICOM images; recent image encryption algorithm; typical image encryption algorithm; JPEG compression; colour images; Arnold transform; Vigenère cipher algorithm; image block-by-block; size 16 × 16 pixels; medical applications; novel encryption method
Abstract: This study presents a novel encryption method for securing DICOM (Digital Imaging and Communications in Medicine) images used in medical applications. The proposed algorithm splits the original image into blocks of 16 × 16 pixels, then encrypts them in three steps. First, the keys k1, k2, k3 and k4 are transformed from four vectors of 16 pixels into a matrix of 16 × 16 pixels. The algorithm then encrypts the image block by block using the Vigenère cipher, modifying the key for each block with the Arnold transform. The proposed encryption algorithm is compatible with colour images and JPEG (Joint Photographic Experts Group) compression. Cryptanalysis demonstrates that the algorithm withstands standard cryptographic attacks, and timing measurements show it is faster than typical recent image encryption algorithms.
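The block-wise scheme described above can be sketched as follows: a byte-wise Vigenère cipher (addition modulo 256) applied per 16 × 16 block, with Arnold's cat map permuting the key matrix between blocks. This is a minimal illustration, not the authors' implementation; the demo image, key schedule and two-block layout are hypothetical.

```python
import numpy as np

N = 16  # block size used in the paper

def arnold(mat):
    """One iteration of Arnold's cat map on an N x N matrix:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    out = np.empty_like(mat)
    for x in range(N):
        for y in range(N):
            out[(x + y) % N, (x + 2 * y) % N] = mat[x, y]
    return out

def encrypt_block(block, key):
    """Vigenere-style byte-wise encryption: c = (p + k) mod 256."""
    return (block + key) % 256

def decrypt_block(block, key):
    """Inverse: p = (c - k) mod 256."""
    return (block.astype(np.int32) - key) % 256

# hypothetical demo: a 32 x 16 image made of two 16 x 16 blocks
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (2 * N, N), dtype=np.int32)
key0 = rng.integers(0, 256, (N, N), dtype=np.int32)

enc = np.empty_like(img)
key = key0
for b in range(2):
    enc[b * N:(b + 1) * N] = encrypt_block(img[b * N:(b + 1) * N], key)
    key = arnold(key)  # per-block key update via the Arnold transform

# decryption mirrors the same key schedule
dec = np.empty_like(img)
key = key0
for b in range(2):
    dec[b * N:(b + 1) * N] = decrypt_block(enc[b * N:(b + 1) * N], key)
    key = arnold(key)
```

Because the cat map is a bijection on the 16 × 16 torus, encryptor and decryptor can regenerate the same per-block key stream from the shared initial key.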
Generalised deep learning framework for HEp-2 cell recognition using local binary pattern maps
Buda Bajić, Tomáš Majtner, Joakim Lindblad, Nataša Sladoje
Keywords: biological techniques; image classification; image texture; learning (artificial intelligence); medical image processing; deep learning based image classification; ensemble approach; texture information; local binary patterns maps; effective cell image classifier; generalised deep learning framework; HEp-2 cell recognition; HEp-2 cell image classifier; local binary pattern mapping; serum evaluation
Abstract: The authors propose a novel HEp-2 cell image classifier to improve the automation of patients' serum evaluation. Their solution builds on recent progress in deep learning based image classification: an ensemble of multiple state-of-the-art architectures. They incorporate additional texture information extracted by an improved version of local binary pattern maps, αLBP-maps, which enables a very effective cell image classifier. This combination is trained on three publicly available datasets, and its general applicability is demonstrated through evaluation on three independent test sets. The presented results show that the approach improves average performance across the three public datasets.
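The texture maps mentioned above can be illustrated with a basic 8-neighbour LBP map; this is a stand-in for the improved αLBP-maps of the paper, whose exact variant is not reproduced here.

```python
import numpy as np

def lbp_map(img):
    """Basic 8-neighbour local binary pattern map: each output pixel
    is a byte whose bits record whether each neighbour is at least as
    bright as the centre pixel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets in clockwise order around the centre pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neighbour >= centre).astype(np.int32) << bit).astype(np.uint8)
    return out
```

In the paper's pipeline such maps are fed alongside the raw images into the network ensemble as an extra texture channel.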
Compressive sensed video recovery via iterative thresholding with random transforms
Evgeny Belyaev, Marian Codreanu, Markku Juntti, Karen Egiazarian
Keywords: compressed sensing; discrete cosine transforms; discrete wavelet transforms; filtering theory; image denoising; image reconstruction; iterative methods; medical image processing; transforms; video signal processing; frame residual computation algorithm; video block-matching; random 2D discrete wavelet; random shift; fixed block size; random selection; block-based 2D discrete cosine; simple example; resulting pixel value; different transforms; iteration; fixed sparsifying; iterative thresholding algorithm; random transforms; compressive sensed video recovery
Abstract: The authors consider the problem of compressive sensed video recovery via an iterative thresholding algorithm. Traditionally, it is assumed that some fixed sparsifying transform is applied at each iteration of the algorithm. To improve recovery performance, the thresholding could instead be applied with different transforms at each iteration, yielding several estimates for each pixel, with the resulting pixel value computed by simple averaging. However, calculating these estimates significantly increases reconstruction complexity. The authors therefore propose a heuristic approach in which, at each iteration, only one transform is randomly selected from some set of transforms. First, they present simple examples using the block-based 2D discrete cosine transform as the sparsifying transform, and show that randomly selecting the block size at each iteration significantly outperforms using a fixed block size. Second, building on these examples, they apply the proposed approach with video block-matching and 3D filtering (VBM3D) as the thresholding step, and show that random transform selection within VBM3D improves recovery performance compared with VBM3D using a fixed transform.
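The random-transform idea can be shown on a toy 1D denoising loop: at every iteration a block DCT of randomly chosen block size is used for hard thresholding. The signal, block sizes and threshold below are hypothetical, and the sketch omits the compressive-sensing measurement step and VBM3D of the paper.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (j + 0.5) * k / n)
    C[0] /= np.sqrt(2.0)
    return C

def threshold_in_block_dct(x, block, lam):
    """Hard-threshold the signal in a block-DCT domain."""
    C = dct_matrix(block)
    coeffs = x.reshape(-1, block) @ C.T   # forward block DCT
    coeffs[np.abs(coeffs) < lam] = 0      # kill small coefficients
    return (coeffs @ C).ravel()           # inverse transform

rng = np.random.default_rng(1)
clean = np.repeat([1.0, -0.5, 2.0, 0.0], 16)   # piecewise-constant, DCT-sparse
noisy = clean + 0.1 * rng.standard_normal(64)

x = noisy.copy()
for _ in range(30):
    block = rng.choice([4, 8, 16])  # a random transform per iteration
    x = 0.5 * x + 0.5 * threshold_in_block_dct(x, block, lam=0.25)
```

Varying the block size across iterations averages out the blocking artefacts that any single fixed partition would leave behind.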
Robust random walk for leaf segmentation
Jing Hu, Zhibo Chen, Rongguo Zhang, Meng Yang, Shuai Zhang
Keywords: biology computing; image segmentation; probability; smooth segmentation edges; unconstrained leaf images; robust leaf segmentation; smoothed leaf segmentation; specified connected pixels; nonlocal pixels; leaf surface; illumination; pairwise pixels; leaf shapes; imaging conditions; robust random walk
Abstract: In this study, the authors focus on leaf segmentation under different imaging conditions (e.g. backgrounds and shadows). A new method, robust random walk (RW), is proposed to propagate the prior from the user's specified pixels. Specifically, they first employ RWs to take the relationship of pairwise pixels into consideration, and add a superpixel-consistency constraint to smooth the segmentation edges. Owing to illumination effects, some parts of a leaf surface are brighter than others, which can harm the subsequent label propagation. To address this problem, they learn a common subspace that accounts for the illumination of local and non-local pixels, giving good adaptability to noise interference and non-uniform illumination. In addition, since RW only considers the pairwise relationship of pixels, it is sensitive to the specified and connected pixels. Thus, they further employ a log-likelihood ratio to predict the probability of a pixel belonging to the background and use it to guide the label propagation. The proposed method yields smooth and robust leaf segmentations. Experimental results on unconstrained leaf images demonstrate the efficiency of the algorithm.
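One ingredient of the pipeline, the log-likelihood ratio that guides label propagation, can be sketched with simple intensity histograms built from the user's seed pixels. The bin count, seed samples and the histogram-based density estimate are assumptions for illustration, not the authors' exact estimator.

```python
import numpy as np

def log_likelihood_ratio(values, fg_seeds, bg_seeds, bins=16):
    """Per-pixel log-likelihood ratio log p(v|leaf) - log p(v|background),
    estimated from intensity histograms of user-marked seed pixels;
    positive values favour the leaf label."""
    edges = np.linspace(0, 256, bins + 1)
    p_fg, _ = np.histogram(fg_seeds, bins=edges, density=True)
    p_bg, _ = np.histogram(bg_seeds, bins=edges, density=True)
    idx = np.clip(np.digitize(values, edges) - 1, 0, bins - 1)
    eps = 1e-6  # avoid log(0) in empty histogram bins
    return np.log(p_fg[idx] + eps) - np.log(p_bg[idx] + eps)
```

The resulting per-pixel score can then bias the random-walk transition probabilities toward the more likely label.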
Empirical wavelet transform-based fog removal via dark channel prior
Manas Sarkar, Priyanka Rakshit Sarkar, Ujjwal Mondal, Debashis Nandi
Keywords: equalisers; fog; image colour analysis; image enhancement; image restoration; inverse transforms; object detection; wavelet transforms; fog removal technique; DCP; empirical wavelet transformation coefficients; foggy input image; wavelet coefficients; embedded wavelet transformed image; output image; adaptive histogram equalisation technique; inverse transformed image; sharp contrast output; high contrast output; contrast-enhanced image; S-channel; V-channel gain adjustment; dark channel; image processing; surveillance system; defogging techniques; colour-line model; dense fog
Abstract: Haze and fog removal from videos and images has received considerable attention in video and image processing, because fog severely degrades tracking, surveillance and object detection. Defogging techniques proposed so far are based on polarisation, the colour-line model, anisotropic diffusion, the dark channel prior (DCP), etc. However, these methods cannot produce output of desirable quality in the presence of dense fog or sky regions. In this study, the authors propose a novel fog removal technique in which the DCP is applied to the low-frequency component of the empirical wavelet transform coefficients of the foggy input image. They apply unsharp masking to the wavelet coefficients of the transformed image to improve the sharpness of the output. Contrast-limited adaptive histogram equalisation is then used as a post-processing step on the inverse-transformed image to produce sharp, high-contrast output. Finally, the colour and intensity of the contrast-enhanced image are boosted through S-channel and V-channel gain adjustment. The proposed method significantly improves the overall quality of the output image compared to contemporary techniques, as confirmed by quantitative and qualitative measurements.
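The DCP step at the core of the method can be sketched in its standard form: a per-pixel channel minimum followed by a local minimum filter, from which a transmission map is estimated. Patch size and the `omega` parameter are conventional defaults, and the paper's actual variant (DCP on empirical-wavelet low-frequency coefficients) is omitted here.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over the colour channels,
    followed by a minimum filter over a patch neighbourhood."""
    mins = img.min(axis=2)
    h, w = mins.shape
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, A, omega=0.95, patch=15):
    """Standard DCP transmission estimate: t = 1 - omega * dark(I / A),
    where A is the atmospheric light per channel."""
    return 1.0 - omega * dark_channel(img / A, patch)
```

On a haze-free colourful region the dark channel is near zero and the estimated transmission near one; in dense fog the dark channel rises and the transmission drops accordingly.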
Image DAEs based on residual entropy maximum
Qian Xiang, Likun Peng, Xueliang Pang
Keywords: convolutional neural nets; Gaussian noise; image denoising; image restoration; learning (artificial intelligence); maximum entropy methods; mean square error methods; image residual statistical features; Riesz feature similarity metric indexes; peak SNR; priors based methods; original mean-square-error loss function DAEs; improved DAEs; image information; augmented Lagrange function method; improved training algorithm; residual statistics; ordinary DAEs; training loss function; maximum entropy principle; image restoration quality; target images; noisy images; end-to-end mappings; denoising auto-encoders; improved convolution neural network auto-encoders; image processing; non-Gaussian noise; low signal-to-noise ratio; residual entropy maximum; image DAEs
Abstract: Image denoising under low signal-to-noise ratio (SNR) and non-Gaussian noise is still a challenging problem in image processing. In this study, the authors propose improved convolutional neural network auto-encoders for image denoising. Unlike prior-based methods, denoising auto-encoders (DAEs) can learn end-to-end mappings from noisy images to target images. This study investigates statistical features of the image residual between restored and target images. Following the maximum entropy principle, the training loss function of ordinary DAEs is modified with residual statistics as a constraint condition, and an improved training algorithm is proposed based on the augmented Lagrange function method. The quality of the restored image is thus improved by removing image information from the residual more efficiently. Experiments show that the improved DAEs not only outperform the original mean-square-error loss DAEs in both peak SNR and the Riesz feature similarity metric index, but can also suppress different types and levels of noise with a single model.
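The maximum-entropy intuition can be made concrete with a simple diagnostic: the histogram entropy of the residual. A residual that still carries image structure concentrates in few bins and has low entropy, while a noise-like residual spreads out. This is an illustrative measure only, not the authors' exact constraint or Lagrangian.

```python
import numpy as np

def residual_entropy(residual, bins=64):
    """Shannon entropy (bits) of the residual's histogram. Under the
    maximum-entropy view, a residual that retains no image structure
    should look noise-like, i.e. have high entropy for its variance."""
    counts, _ = np.histogram(residual, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

During training, such a statistic can serve as the constraint term that the augmented Lagrangian pushes toward its maximum.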
Efficient symmetric image encryption by using a novel 2D chaotic system
Huiqing Huang, Shouzhi Yang, Ruisong Ye
Keywords: chaos; cryptography; image coding; image processing; transforms; white noise; digital images security; novel 2D chaotic system; efficient symmetric image encryption; scrambled image; chaotic sequences; novel image encryption algorithm; existing 1D chaotic maps; complex chaotic behaviours; wider chaotic ranges; different 1D chaotic maps; 2D chaotic maps; two-dimensional chaotic system; white noise image
Abstract: A common and effective way to protect digital images is to encrypt them into white-noise images. In this study, the authors design a new two-dimensional (2D) chaotic system derived from two existing one-dimensional (1D) chaotic maps. Simulation results show that the new 2D system can produce many 2D chaotic maps by selecting different 1D chaotic maps, and that these have wider chaotic ranges and more complex chaotic behaviours than the existing 1D maps. To investigate its applications, the authors design a novel image encryption algorithm using the proposed 2D chaotic maps. First, the original image is scrambled using chaotic sequences generated by the new 2D chaotic maps, the Arnold transform and the Hilbert curve. Then the scrambled image is confused and diffused by chaotic sequences. Finally, the performance of the proposed encryption algorithm is simulated, and the experimental results validate its effectiveness and reliability.
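One hypothetical way to build a 2D map from two 1D chaotic maps is to cross-couple a logistic map and a sine map modulo 1, as sketched below. The coupling form, parameters `r`, `a`, `c` and the burn-in are assumptions for illustration; the authors' actual system is not specified here.

```python
import numpy as np

def logistic(x, r=3.99):
    """1D logistic map on [0, 1]."""
    return r * x * (1.0 - x)

def sine_map(x, a=0.99):
    """1D sine map on [0, 1]."""
    return a * np.sin(np.pi * x)

def coupled_2d(x, y, c=0.3):
    """A 2D map built from the two 1D maps by cross-coupling their
    outputs modulo 1 (illustrative construction only)."""
    xn = (logistic(x) + c * sine_map(y)) % 1.0
    yn = (sine_map(y) + c * logistic(x)) % 1.0
    return xn, yn

def sequence(x0, y0, n, burn=100):
    """Generate n (x, y) samples after discarding a burn-in, as one
    would when deriving key-stream sequences for scrambling."""
    x, y = x0, y0
    for _ in range(burn):
        x, y = coupled_2d(x, y)
    out = np.empty((n, 2))
    for i in range(n):
        x, y = coupled_2d(x, y)
        out[i] = (x, y)
    return out
```

For encryption use, the sensitivity of such maps to the initial state (the key) is the essential property: nearby seeds produce rapidly diverging sequences.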
Joint rain and atmospheric veil removal from single image
Zetian Mi, Yafei Wang, Congcong Zhao, Fengming Du, Xianping Fu
Keywords: Gaussian processes; image colour analysis; image enhancement; image restoration; learning (artificial intelligence); rain; visibility; joint rain and atmospheric veil removal; single image; natural rainy scenes; nearby individual rain streaks; atmospheric veiling effect; distant accumulated rain; rain accumulation; generalised rain model; atmospheric light; natural heavy rain scenario
Abstract: In natural rainy scenes, visibility is significantly degraded by two types of phenomena: specular highlights of nearby individual rain streaks, and the atmospheric veiling effect caused by distant accumulated rain. However, most existing deraining methods consider only the first kind of degradation, which limits their applicability in heavy rain. In this study, a joint rain and atmospheric veil removal framework is proposed to address this problem. Since rain streaks and rain accumulation are entangled with each other and intractable to simulate, real-world clean/rainy image pairs are hard to generate. Hence, after introducing a generalised rain model that physically represents both rain streaks and the atmospheric veil, the authors do not learn a mapping between image pairs with a deep-learning architecture; instead, they estimate the rain streaks, transmission and atmospheric light via a Gaussian mixture model patch prior and the dark channel prior, and solve the rain model directly. Comprehensive experimental evaluations show that the proposed method outperforms other state-of-the-art methods in terms of both visibility and colour vividness, especially in natural heavy rain scenarios.
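A commonly used form of such a generalised rain model composes the observation as I = (J + S)·t + A·(1 − t), where J is the clean scene, S a rain-streak layer, t the transmission and A the atmospheric light. The sketch below shows the forward model and its algebraic inversion given estimates of S, t and A; the exact formulation and priors of the paper are not reproduced, and all quantities here are synthetic.

```python
import numpy as np

def rain_model(J, S, t, A):
    """Generalised rain model: observed image I composed from the
    clean scene J, a rain-streak layer S, transmission t and
    atmospheric light A (a commonly used formulation)."""
    return (J + S) * t + A * (1.0 - t)

def recover_scene(I, S, t, A, t_min=0.1):
    """Invert the model given estimates of S, t and A; t is clamped
    below by t_min to avoid amplifying noise where rain is dense."""
    return (I - A * (1.0 - t)) / np.maximum(t, t_min) - S
```

The hard part in practice is estimating S, t and A from a single image, which is where the GMM patch prior and dark channel prior enter; the inversion itself is exact once those estimates are fixed.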
Smoke detection in ship engine rooms based on video images
Kyung-Min Park, Cherl-O Bae
Keywords: computer vision; feature extraction; fire safety; image classification; image motion analysis; object detection; ships; smoke; smoke detectors; support vector machines; video signal processing; ship engine rooms; fire detection systems; heat detection; video smoke detection systems; VSD system; smoke generator; motion detection; support vector machine classifier; nonsmoke region; real-time smoke detection systems; video frames; video images; local binary pattern descriptor; feature vector extraction; machine vision
Abstract: Fire detection systems in ships are based on smoke and heat detection in accordance with safety regulations. The rapid advancement of machine vision technology has led to the development of video smoke detection (VSD) systems. In this study, a VSD system is applied to smoke detection within a ship's engine room. A dataset covering a range of scenarios was created with a smoke generator. Smoke detection was based on motion detection, which generates candidate regions, and a support vector machine classifier, which classifies them. A local binary pattern descriptor was used to extract the feature vector, and the training set was built from randomly selected video frames. Experimental results seldom produced false-positive windows in non-smoke regions. However, if the greyscale difference between the background and the smoke falls below the motion-detection threshold, the system cannot detect the smoke. Processing time is sufficiently fast for real-time smoke detection. For installing a VSD system on board a vessel, the authors recommend a performance standard that the system must meet.
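The motion-detection stage that produces candidate regions can be sketched as simple background differencing; the threshold value is a hypothetical placeholder, and the subsequent LBP + SVM classification stage is not shown.

```python
import numpy as np

def motion_mask(background, frame, thresh=15):
    """Candidate-region detection by background differencing: flag
    pixels whose greyscale change exceeds `thresh`. As the abstract
    notes, smoke whose difference stays below the threshold is missed."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh
```

Connected regions of the resulting mask become the candidate windows that the SVM then labels as smoke or non-smoke from their LBP feature vectors.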
Two-stage image smoothing based on edge-patch histogram equalisation and patch decomposition
Yepeng Liu, Xiang Ma, Xuemei Li, Caiming Zhang
Keywords: edge detection; feature extraction; gradient methods; image colour analysis; image enhancement; image segmentation; image texture; minimisation; smoothing methods; L0 gradient minimisation; image abstraction; image processing; patch boundaries; smooth component; edge pixel rate; nonedge-patch; edge distribution; image patches; edge map; texture region; two-stage image smoothing method; structural edges; edge-patch histogram equalisation; content-aware image resizing
Abstract: Some important structural edges in an image are smoothed away because of their small gradients, while others with larger gradients are preserved. The authors therefore propose a two-stage image smoothing method based on edge-patch histogram equalisation and patch decomposition. The aim is to increase the gradient of important structural edges while reducing the gradient of texture regions. Using image segmentation, they divide the image into edge-patches, where structural edges are concentrated, and non-edge-patches, where texture details are concentrated. Each edge-patch is histogram-equalised to increase the gradient of its edge pixels, and all patches are decomposed to extract a smooth component, reducing the gradient of their pixels. The smooth component of each patch is smoothed via L0 gradient minimisation. To ensure continuity at patch boundaries, the edge-patches are inversely equalised. Finally, the whole image is smoothed via L0 gradient minimisation to remove residual textures and seams. Experimental results demonstrate that the proposed method is more competitive in maintaining important structural edges and removing texture details than state-of-the-art approaches, and it can be applied to many areas of image processing.
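The edge-patch equalisation stage relies on a standard operation that can be sketched directly: histogram equalisation of an 8-bit patch, which stretches its intensity distribution and thereby enlarges edge gradients before the L0 smoothing step. This is plain global equalisation on one patch, a simplified stand-in for the paper's edge-patch processing.

```python
import numpy as np

def equalise_patch(patch):
    """Histogram equalisation of an 8-bit patch: stretching the
    intensity distribution increases the gradients of edge pixels,
    which is the purpose of the edge-patch equalisation stage."""
    hist = np.bincount(patch.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]          # first occupied level
    denom = max(int(cdf[-1] - cdf_min), 1)        # guard constant patches
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255), 0, 255)
    return lut.astype(np.uint8)[patch]
```

A low-contrast patch whose values span only a couple of grey levels is mapped onto the full 0-255 range, so a weak structural edge inside it survives the subsequent gradient-based smoothing.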