• Hancock Lawrence posted an update 1 month, 3 weeks ago

    Identifying sleep stages from bio-signals requires the time-consuming and tedious labor of skilled clinicians. Deep learning approaches have been introduced to tackle the problem of automatic sleep stage classification. However, replacing clinicians with an automatic system remains difficult: individual bio-signals differ in many respects, making model performance inconsistent across new individuals. Thus, we explore the feasibility of a novel approach that assists clinicians and lessens their workload. We propose a transfer learning framework, MetaSleepLearner, based on Model-Agnostic Meta-Learning (MAML), which transfers acquired sleep staging knowledge from a large dataset to new individual subjects. The framework requires clinicians to label only a few sleep epochs, leaving the remainder to the system. Layer-wise Relevance Propagation (LRP) was also applied to understand the learning course of our approach. Across all acquired datasets, MetaSleepLearner achieved improvements of 5.4% to 17.7% over the conventional approach, with a statistically significant difference between the means of the two approaches. Visualizing the model's interpretation after adaptation to each subject further confirmed that the performance stemmed from reasonable learning. MetaSleepLearner outperformed the conventional approaches when fine-tuned on recordings of both healthy subjects and patients. This is the first work to investigate a non-conventional pre-training method, MAML, opening the possibility of human-machine collaboration in sleep stage classification and easing clinicians' labelling burden to only a few epochs rather than an entire recording.

    In this article, we present a novel lightweight path for deep residual neural networks.
    The proposed method integrates a simple plug-and-play module, i.e., a convolutional encoder-decoder (ED), as an augmented path to the original residual building block. Thanks to the abstraction ability of the encoding stage, the decoder tends to generate feature maps in which highly semantically relevant responses are activated while irrelevant responses are suppressed. Through a simple elementwise addition, the representations from the identity shortcut and the original transformation branch are enhanced by our ED path. Furthermore, we derive lightweight counterparts by removing a portion of the channels in the original transformation branch; this causes no obvious performance drop while yielding computational savings. Through comprehensive experiments on ImageNet, MS-COCO, CUB200-2011, and CIFAR, we demonstrate a consistent accuracy gain from our ED path across various residual architectures, with comparable or even lower model complexity. Concretely, it decreases the top-1 error of ResNet-50 and ResNet-101 by 1.22% and 0.91%, respectively, on ImageNet classification, and increases the mmAP of Faster R-CNN with ResNet-101 by 2.5% on MS-COCO object detection. The code is available at https://github.com/Megvii-Nanjing/ED-Net.

    Deep neural networks (DNNs) have proven to be excellent solutions to staggering and sophisticated problems in machine learning. A key reason for their success is the strong expressive power of their function representation. For piecewise linear neural networks (PLNNs), the number of linear regions is a natural measure of expressive power, since it characterizes the number of linear pieces available to model complex patterns. In this article, we theoretically analyze the expressive power of PLNNs by counting and bounding their number of linear regions.
    We first refine the existing upper and lower bounds on the number of linear regions of PLNNs with rectified linear units (ReLU PLNNs). Next, we extend the analysis to PLNNs with general piecewise linear (PWL) activation functions and derive the exact maximum number of linear regions of single-layer PLNNs. Moreover, we obtain upper and lower bounds on the number of linear regions of multilayer PLNNs, both of which scale polynomially with the number of neurons per layer and the number of pieces of the PWL activation function, but exponentially with the number of layers. This key property enables deep PLNNs with complex activation functions to outperform their shallow counterparts when computing highly complex and structured functions, which, to some extent, explains the performance improvement of deep PLNNs in classification and function fitting.

    Recently, many works on discriminant analysis have promoted model robustness against outliers by using the L1- or L2,1-norm as the distance metric. However, their robustness and discriminative power are both limited. In this article, we present a new robust discriminant subspace (RDS) learning method for feature extraction, with an objective function formulated in a different form. To guarantee that the subspace is robust and discriminative, we measure the within-class distances with the L2,s-norm and the between-class distances with the L2,p-norm. This also endows our method with rotational invariance. Since the proposed model involves both L2,p-norm maximization and L2,s-norm minimization, it is very challenging to solve. To address this, we present an efficient nongreedy iterative algorithm. In addition, motivated by the trace ratio criterion, we devise a mechanism that automatically balances the contributions of the different terms in the objective. RDS is very flexible and can be extended to other existing feature extraction techniques.
    An in-depth theoretical analysis of the algorithm's convergence is presented in this article. Experiments on several typical image classification databases yield promising results that indicate the effectiveness of RDS.

    We developed a new grip force measurement concept that allows tactile stimulation mechanisms to be embedded in a gripper. The concept uses a single force sensor to measure the force applied on each side of the gripper and substantially reduces tactor-motion artifacts in the force measurement. To test its feasibility, we built a device that measures grip force control in response to tactile stimulation from a moving tactor. We calibrated and validated the device against a second force sensor over a range of 0 to 20 N without tactor movement. We then tested the effect of tactor movement on the measured grip force and observed artifacts of 1% of the measured force. During the application of dynamically changing grip forces, the average errors were 2.9% and 3.7% for the left and right sides of the gripper, respectively. We characterized the bandwidth, backlash, and noise of our tactile stimulation mechanism. Finally, a user study showed that participants increased their grip force in response to tactor movement; the increase was larger for smaller target forces and depended on the amount of tactile stimulation.
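The MAML-style adaptation behind MetaSleepLearner (first abstract above) can be sketched in a few lines. The sketch below is a minimal first-order illustration on a toy 1-D regression family, not the authors' code; the task family, learning rates, and all names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=20):
    # Hypothetical task family: 1-D linear regression with a random slope
    # (standing in for "one subject" in the sleep-staging setting).
    slope = rng.uniform(-2.0, 2.0)
    def batch():
        X = rng.uniform(-1.0, 1.0, size=(n, 1))
        return X, slope * X[:, 0]
    return batch

def loss_and_grad(w, X, y):
    # Mean-squared error of the linear model X @ w and its gradient in w.
    r = X @ w - y
    return float(np.mean(r ** 2)), 2.0 * X.T @ r / len(y)

inner_lr, outer_lr = 0.1, 0.05
w = np.zeros(1)                          # meta-initialisation

for _ in range(300):                     # outer (meta) loop
    meta_grad = np.zeros_like(w)
    for _ in range(5):                   # a batch of tasks
        task = make_task()
        X_s, y_s = task()                # support set: the few labelled examples
        X_q, y_q = task()                # query set from the same task
        _, g = loss_and_grad(w, X_s, y_s)
        w_adapt = w - inner_lr * g       # one inner-loop adaptation step
        _, g_q = loss_and_grad(w_adapt, X_q, y_q)
        meta_grad += g_q                 # first-order MAML approximation
    w -= outer_lr * meta_grad / 5
```

After meta-training, adapting to a new task is just the inner step: one gradient update on a handful of labelled examples, which mirrors the "label only a few epochs" workflow described in the abstract.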
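The ED-augmented residual block of the second abstract amounts to summing three branches elementwise: the identity shortcut, the original transformation branch, and the encoder-decoder path. Below is a toy fully-connected analogue of that structure (the real module is convolutional); the class name, weight initialisation, and dimensions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

class EDResidualBlock:
    """Toy fully-connected analogue of a residual block augmented with an
    encoder-decoder (ED) path. Hypothetical sketch, not the paper's module."""

    def __init__(self, dim, bottleneck):
        s = 1.0 / np.sqrt(dim)
        self.W_f = rng.uniform(-s, s, (dim, dim))           # transformation branch
        self.W_enc = rng.uniform(-s, s, (dim, bottleneck))  # ED encoder
        self.W_dec = rng.uniform(-s, s, (bottleneck, dim))  # ED decoder

    def __call__(self, x):
        f = relu(x @ self.W_f)                   # original transformation branch
        ed = relu(x @ self.W_enc) @ self.W_dec   # encoder-decoder path
        return relu(x + f + ed)                  # identity + branch + ED, elementwise
```

The bottleneck in the ED path plays the role the abstract ascribes to encoding: it forces a compressed representation, so the decoded output emphasises the dominant (semantically relevant) directions before the elementwise sum.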
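The linear-region notion from the PLNN abstract can also be probed empirically: within one linear region every ReLU has a fixed on/off state, so each distinct activation pattern observed over the input space corresponds to a distinct region, and counting distinct patterns on a dense grid gives a lower bound on the region count. A small sketch under arbitrary, illustrative choices of architecture and grid:

```python
import numpy as np

rng = np.random.default_rng(2)

# A small ReLU network on 2-D inputs: two hidden layers of width 4
# (sizes chosen only for illustration).
W1, b1 = rng.normal(size=(2, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 4)), rng.normal(size=4)

def activation_pattern(X):
    """Binary on/off pattern of every ReLU unit for each input row."""
    h1 = X @ W1 + b1
    h2 = np.maximum(h1, 0.0) @ W2 + b2
    return np.hstack([h1 > 0, h2 > 0])

# Enumerate patterns over a dense grid on [-3, 3]^2; each distinct pattern
# witnesses one linear region, so this is a lower bound on the true count.
g = np.linspace(-3.0, 3.0, 200)
X = np.array([(a, b) for a in g for b in g])
patterns = {tuple(p) for p in activation_pattern(X)}
print(len(patterns))
```

With 8 ReLU units the count can never exceed 2^8 = 256 patterns, and the abstract's bounds are much tighter than that; the point of the sketch is only to make the "number of linear pieces" measure concrete.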