• Volume 12, Issue 2, 2022 Table of Contents
    • Preface to the Special Issue on Robust Machine Learning for Open Scenarios

      2022, 12(2):153-156. DOI: 10.21655/ijsi.1673-7288.00265


      Abstract: Preface to the Special Issue on Robust Machine Learning for Open Scenarios

    • Multi-granularity Inter-class Correlation Based Contrastive Learning for Open Set Recognition

      2022, 12(2):157-175. DOI: 10.21655/ijsi.1673-7288.00266


      Abstract: In recent years, deep neural networks have achieved continual breakthroughs in classification tasks. However, when faced with unknown samples in the testing phase, they mistakenly assign them to known classes. Open set recognition is a possible way to solve this problem: it requires the model not only to classify the known classes but also to distinguish unknown samples accurately. Most existing methods are designed heuristically on the basis of certain assumptions; although they keep improving performance, they have not analyzed the key factors that affect the task. In this paper, we analyze the commonalities of existing methods by designing a new decision-variable experiment and find that the model's ability to learn representations of known classes is an important factor. An open set recognition method is then proposed that enhances the model's representation learning ability. First, given the powerful representation learning capability demonstrated by contrastive learning and the label information available in the open set recognition task, supervised contrastive learning is introduced to improve the model's modeling of known classes. Second, considering that inter-class correlation captures representation learning at the class level and that classes often exhibit a hierarchical structure, a multi-granularity inter-class correlation loss function is designed. By building a hierarchy in the label semantic space and measuring multi-granularity inter-class correlation, this loss constrains the model to learn the correlations among different known classes, further improving its representation learning ability. Finally, experimental results on multiple standard datasets verify the effectiveness of the proposed method in open set recognition.
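
      The supervised contrastive term the abstract introduces can be illustrated with a minimal sketch. The snippet below is a generic NumPy implementation of a supervised contrastive loss (in the spirit of SupCon), not the paper's multi-granularity formulation; the function name and temperature value are assumptions for illustration.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over one batch.

    z      : (n, d) L2-normalized embeddings
    labels : (n,) integer class labels
    tau    : temperature
    """
    n = z.shape[0]
    sim = z @ z.T / tau                       # scaled pairwise similarities
    np.fill_diagonal(sim, -np.inf)            # never compare a sample to itself
    # log-softmax over each row: log p(j | anchor i)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    loss = 0.0
    for i in range(n):
        pos = labels == labels[i]
        pos[i] = False                        # positives: same class, not self
        if pos.any():
            loss -= log_prob[i, pos].mean()   # pull positives together
    return loss / n
```

      Batches whose same-class samples are already close in embedding space get a lower loss, which is the pressure that improves class-level representation learning.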

    • Towards Robust Adversarial Training via Dual-label Supervised and Geometry Constraint

      2022, 12(2):177-193. DOI: 10.21655/ijsi.1673-7288.00268


      Abstract: Recent studies have shown that adversarial training is an effective method to defend against adversarial sample attacks. However, existing adversarial training strategies improve model robustness at the price of lowering the model's generalization ability. At this stage, mainstream adversarial training methods usually treat each training sample independently and ignore inter-sample relationships, which prevents the model from fully exploiting the geometric relationship between samples to learn a more robust model for better defense against adversarial attacks. Therefore, this paper focuses on how to maintain the stability of the geometric structure between samples during adversarial training so as to improve model robustness. Specifically, a new geometric structure constraint method is designed for adversarial training, with the aim of keeping the feature space distributions of normal samples and adversarial samples consistent. Furthermore, a dual-label supervised learning method is proposed, which leverages the labels of both natural samples and adversarial samples for joint supervised training of the model. Lastly, the characteristics of the dual-label supervised learning method are analyzed, and the working mechanism of the adversarial samples is explained theoretically. Extensive experiments on benchmark datasets show that the proposed approach effectively improves the robustness of the model while maintaining good generalization accuracy. The related code has been open-sourced: https://github.com/SkyKuang/DGCAT
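
      The dual-label idea of jointly supervising on natural and adversarial samples can be sketched on a toy linear classifier. This is an illustrative FGSM-based construction, not the paper's DGCAT method; all names, the perturbation budget, and the mixing weight are assumptions.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fgsm(W, x, y, eps):
    """One-step FGSM perturbation for a linear softmax classifier."""
    p = softmax(x @ W)
    p[np.arange(len(y)), y] -= 1.0            # dCE/dscores = p - onehot(y)
    grad_x = p @ W.T                          # gradient w.r.t. the input
    return x + eps * np.sign(grad_x)

def dual_label_loss(W, x, y, eps=0.1, alpha=0.5):
    """Joint cross-entropy on natural and FGSM-perturbed samples,
    both supervised with the same ground-truth labels."""
    x_adv = fgsm(W, x, y, eps)
    def ce(xb):
        p = softmax(xb @ W)
        return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    return alpha * ce(x) + (1 - alpha) * ce(x_adv)
```

      Because the cross-entropy of a linear softmax model is convex in the input, the FGSM step never decreases the loss, so the adversarial term genuinely adds a harder supervision signal.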

    • Unsupervised New-set Domain Adaptation with Self-supervised Knowledge

      2022, 12(2):195-211. DOI: 10.21655/ijsi.1673-7288.00269


      Abstract: Unsupervised Domain Adaptation (UDA) aims to use a source domain with large amounts of labeled data to help the learning of a target domain without any label information. In UDA, the source and target domains are usually assumed to have different data distributions but share the same class label space. Nevertheless, in real-world open learning scenarios, label spaces are highly likely to differ across domains. In extreme cases, the domains share no common classes, i.e., all classes in the target domain are new. In such a case, directly transferring class-discriminative knowledge from the source domain may impair performance in the target domain and lead to negative transfer. For this reason, this paper proposes unsupervised new-set domain adaptation with self-supervised knowledge (SUNDA), which transfers sample contrastive knowledge from the source domain and uses self-supervised knowledge from the target domain to guide the knowledge transfer. Specifically, the initial features of the source and target domains are learned by self-supervised learning, and some network parameters are frozen to preserve target domain information. Sample contrastive knowledge from the source domain is then transferred to the target domain to assist the learning of class-discriminative features there. Moreover, a graph-based self-supervised classification loss is adopted to handle target domain classification in the absence of inter-domain common classes. SUNDA is evaluated on cross-domain transfer for handwritten digits and cross-race transfer for face data, in both cases without any common class. The experiments show that SUNDA outperforms UDA, unsupervised clustering, and new class discovery methods in learning performance.
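
      The graph-based self-supervised classification step can be loosely illustrated: the toy sketch below pseudo-labels target samples by linking strongly similar embeddings into a graph and labeling its connected components. It is only a stand-in for SUNDA's actual graph loss; the function name and the similarity threshold are assumptions.

```python
import numpy as np

def graph_pseudo_labels(z, threshold=0.8):
    """Assign pseudo-labels to unlabeled target samples by connecting
    pairs whose cosine similarity exceeds a threshold, then labeling
    connected components with union-find."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    adj = (z @ z.T) >= threshold              # similarity graph
    n = len(z)
    parent = list(range(n))
    def find(i):                              # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j]:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    _, labels = np.unique(roots, return_inverse=True)
    return labels                             # one pseudo-class per component
```

      Since the pseudo-classes come entirely from target-domain structure, no source-domain class names are needed, which matches the no-common-classes setting.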

    • Deep Generative Crowdsourcing Learning with Worker Correlation Utilization

      2022, 12(2):213-230. DOI: 10.21655/ijsi.1673-7288.00270


      Abstract: Traditional supervised learning requires ground-truth labels for the training data, which can be difficult to collect in many cases. In contrast, crowdsourcing learning collects noisy annotations from multiple non-expert workers and infers the latent true labels through some aggregation approach. In this paper, we observe that existing deep crowdsourcing work does not sufficiently model worker correlations, which, however, have been shown to be helpful by previous non-deep learning approaches. We propose a deep generative crowdsourcing learning approach that incorporates the strengths of Deep Neural Networks (DNNs) and exploits worker correlations. The model comprises a DNN classifier as a prior and an annotation generation process. A mixture model of workers' capabilities within each class is introduced into the annotation generation process to model worker correlation. For an adaptive trade-off between model complexity and data fitting, we implement fully Bayesian inference. Building on the natural-gradient stochastic variational inference techniques developed for the Structured Variational AutoEncoder (SVAE), we combine variational message passing for conjugate parameters and stochastic gradient descent for DNN parameters into a unified framework for efficient end-to-end optimization. Experimental results on 22 real crowdsourcing datasets demonstrate the effectiveness of the proposed approach.
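
      As background for the annotation-generation idea, a classical non-deep aggregation baseline in the Dawid–Skene spirit can be sketched: estimate each worker's reliability and the latent labels jointly with EM. Everything below is an illustrative simplification (one accuracy scalar per worker rather than per-class mixtures), not the proposed SVAE-based model; names and iteration counts are assumptions.

```python
import numpy as np

def em_aggregate(A, n_classes, n_iter=20):
    """Tiny EM label aggregation. A[i, w] is worker w's label for
    item i, or -1 if the worker did not annotate that item.
    Returns the estimated true label per item."""
    n_items, n_workers = A.shape
    # Initialize the label posterior with majority-vote counts.
    post = np.zeros((n_items, n_classes))
    for i in range(n_items):
        for w in range(n_workers):
            if A[i, w] >= 0:
                post[i, A[i, w]] += 1
    post /= post.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # M-step: each worker's accuracy under the current posterior.
        acc = np.full(n_workers, 0.5)
        for w in range(n_workers):
            mask = A[:, w] >= 0
            if mask.any():
                acc[w] = post[mask, A[mask, w]].mean()
        acc = np.clip(acc, 1e-3, 1 - 1e-3)
        # E-step: re-weight each vote by the worker's log-odds.
        post = np.zeros((n_items, n_classes))
        for i in range(n_items):
            for w in range(n_workers):
                if A[i, w] >= 0:
                    lw = np.log(acc[w] * (n_classes - 1) / (1 - acc[w]))
                    post[i, A[i, w]] += lw
        post = np.exp(post - post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
    return post.argmax(axis=1)
```

      The deep generative approach described above replaces the fixed per-worker accuracy with a learned, class-dependent mixture and a DNN prior over the true labels, but the E/M alternation conveys the core inference pattern.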

    • Confidence-weighted Learning for Feature Evolution

      2022, 12(2):231-243. DOI: 10.21655/ijsi.1673-7288.00271


      Abstract: Several recent works have studied feature evolvable learning. They usually assume that features do not vanish or appear arbitrarily; instead, old features vanish and new features emerge as the hardware device collecting the data is replaced. However, existing learning algorithms for feature evolution only utilize the first-order information of data streams and ignore the second-order information, which can reveal correlations between features and thus significantly improve classification performance. We propose a Confidence-Weighted learning for Feature Evolution (CWFE) algorithm to solve this problem. First, second-order confidence-weighted learning is introduced to update the prediction model. Next, to make full use of the learned model, a linear mapping is learned in the overlapping period to recover the old features. The existing model is then updated with the recovered old features while, at the same time, a new prediction model is learned with the new features. Furthermore, two ensemble methods are introduced to utilize the two models. Finally, experimental studies show that the proposed algorithms outperform existing feature evolvable learning algorithms.
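
      Two ingredients mentioned above, a second-order confidence-weighted update and the least-squares mapping that recovers old features during the overlapping period, can be sketched as follows. The sketch uses a diagonal AROW-style update as a stand-in for the exact CWFE update; function names and hyperparameters are assumptions.

```python
import numpy as np

def arow_update(mu, sigma, x, y, r=1.0):
    """One diagonal AROW-style second-order update.

    mu    : (d,) mean of the weight distribution
    sigma : (d,) per-feature variance (the model's confidence)
    x, y  : one sample with label y in {-1, +1}
    """
    margin = y * (mu @ x)
    v = (sigma * x * x).sum()                 # variance along this input
    beta = 1.0 / (v + r)
    alpha = max(0.0, 1.0 - margin) * beta     # update only on margin violation
    mu = mu + alpha * y * sigma * x           # uncertain features move more
    sigma = sigma - beta * (sigma * x) ** 2   # confidence grows monotonically
    return mu, sigma

def learn_feature_mapping(X_new, X_old):
    """Least-squares map reconstructing old features from new ones,
    fit on the overlapping period where both feature sets are observed."""
    M, *_ = np.linalg.lstsq(X_new, X_old, rcond=None)
    return M
```

      The per-feature variance is exactly the second-order information the abstract refers to: features the model is still uncertain about receive larger updates.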

    • Image Style Transferring Based on StarGAN and Class Encoder

      2022, 12(2):245-258. DOI: 10.21655/ijsi.1673-7288.00267


      Abstract: Image style transfer technology has been integrated into people's lives and is widely used in practical scenarios such as artistic image creation, photo-to-cartoon conversion, image coloring, filter processing, and occlusion removal, and it therefore has important research significance and application value. StarGAN is a generative adversarial network framework used in recent years for multi-domain image style transfer; it extracts features through simple down-sampling and then generates images through up-sampling. However, the background color information and the detailed facial features in its generated images differ greatly from those in the input images. In this paper, the network structure of StarGAN is improved, and a UE-StarGAN model for image style transfer is proposed by introducing U-Net and an edge-promoting adversarial loss function. At the same time, a class encoder is introduced into the generator of the UE-StarGAN model, and an image style transfer model fusing the class encoder is designed to realize image style transfer with a small sample size. The experimental results reveal that the model can extract more detailed features and has advantages in small-sample settings. The images obtained with the proposed model improve in both qualitative and quantitative analyses, which verifies its effectiveness.
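
      The benefit of the U-Net skip connections introduced into StarGAN can be demonstrated on a toy single-channel example: a plain down/up bottleneck destroys fine detail, while a skip connection re-injects it. This is a didactic sketch, not the UE-StarGAN architecture; all names are assumptions.

```python
import numpy as np

def down(x):
    """2x average-pooling, standing in for one encoder stage."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def up(x):
    """Nearest-neighbour 2x upsampling, standing in for one decoder stage."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(x):
    """Toy U-Net-style pass: the skip connection adds the encoder
    features back after the bottleneck, so fine detail survives."""
    skip = x
    bottleneck = down(x)
    return up(bottleneck) + skip              # the skip connection
```

      On a +/-1 checkerboard, down-then-up averages everything to zero (all detail lost), whereas the skip path restores the input exactly, which is why U-Net-style generators preserve background and facial detail better than plain down/up-sampling.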