• Volume 9, Issue 2, 2020: Table of Contents
    • Biomedicine and Biomedical Engineering
    • Nano-Gold Flexible Sensor Based Gesture Motion Recognition with Different Training Modes

      2020, 9(2):1-8. DOI: 10.12146/j.issn.2095-3135.20191122001


      Abstract: In the study of prosthetic control techniques, researchers usually decode surface electromyography (sEMG) signals to obtain the amputee's motion intention. Traditional sEMG electrodes usually require direct contact with the skin through conductive paste to reduce the impedance between the skin and the electrode, which may cause skin allergies and physical discomfort. sEMG is also easily affected by muscle fatigue, which is inconvenient for long-term monitoring. To address these issues, this study used a nano-gold flexible sensor to decode the deformation signal generated by muscle contraction and explored the classification performance of two training modes. The first was a sequential training mode, in which each action was repeated three times; the second was a random training mode, in which the order of actions was randomized and each action appeared only once. The results show that the average gesture recognition rate of all subjects is above 90%, with no significant difference between the two training modes (sequential training mode 95.46%, random training mode 94.18%, P = 0.2275). The experimental results demonstrate that the nano-gold flexible sensor, like the wet electrode, enables reliable gesture recognition.
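The reported mode comparison (95.46% vs. 94.18%, P = 0.2275) is a standard paired comparison of per-subject recognition rates. A minimal sketch of that computation, assuming hypothetical per-subject accuracies (the `paired_t` helper and all numbers are illustrative, not the authors' data):

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    # Paired t-statistic: mean of per-subject differences over its standard error.
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Hypothetical per-subject recognition rates (%) under each training mode.
sequential = [96.1, 94.8, 95.9, 95.0, 95.5]
randomized = [94.0, 93.5, 95.2, 94.1, 94.1]

print(round(mean(sequential), 2), round(mean(randomized), 2))
print(round(paired_t(sequential, randomized), 3))
```

The t-statistic would then be compared against the t-distribution with n-1 degrees of freedom to obtain the two-sided P-value.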

    • Displacement Accuracy of Magnetic Resonance Acoustic Radiation Force Imaging on Phantom Surface

      2020, 9(2):9-16. DOI: 10.12146/j.issn.2095-3135.20190927001


      Abstract: Magnetic resonance acoustic radiation force imaging (MR-ARFI) is a developing ultrasound focus-localization technology. It localizes the ultrasound focus by detecting the local micron-scale displacement induced in tissue by the acoustic radiation force. Based on the displacement map, transcranial focus localization and phase aberration correction can be performed. However, only a few studies have focused on the accuracy of displacement detection using MR-ARFI. In this paper, the displacement detection accuracy of MR-ARFI was investigated. Under identical experimental conditions (input power, ultrasonic duration, and pulse repetition frequency), the displacement measured by the laser Doppler technique was used as the gold standard and compared with the displacement measured by MR-ARFI. The results demonstrate that the maximum displacement and the displacement map measured by laser Doppler and MR-ARFI are highly consistent. This research shows that MR-ARFI has high displacement accuracy, can reflect the real displacement in the object, and has great potential in brain science.
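The consistency claim above amounts to comparing peak displacements and the agreement of the two displacement profiles. A minimal sketch under hypothetical data (the profiles and the `pearson` helper are illustrative, not measurements from the paper):

```python
from statistics import mean

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length sequences.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical micron-scale displacement profiles along one line through the focus.
laser_doppler = [0.2, 0.8, 2.1, 3.0, 2.2, 0.9, 0.3]
mr_arfi       = [0.3, 0.9, 2.0, 2.9, 2.1, 1.0, 0.2]

print(max(laser_doppler), max(mr_arfi))            # peak displacements
print(round(pearson(laser_doppler, mr_arfi), 3))   # profile agreement
```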

    • Head and Neck CT Segmentation Based on a Combined U-Net Model

      2020, 9(2):17-24. DOI: 10.12146/j.issn.2095-3135.20191216001


      Abstract: Head and neck (HaN) segmentation in CT images is difficult because low contrast and large slice thickness leave little useful information in the coronal and sagittal planes for some organs. In addition, complex and small organs place different requirements on neural network modeling. To achieve accurate segmentation of 22 HaN organs, we combined three U-Net models. The first was a 2D model, which is well suited to thick-slice images. The second was a 3D model using a cropped input that covers most organs at the original in-plane resolution. The third was a small 3D U-Net model that focuses on segmenting two small organs together, using a small region of interest (ROI) computed from the bounding box of the 2D model's segmentation. All three models were trained with the nnU-Net method. The final trained model was submitted as a Docker image to the StructSeg challenge. The leaderboard showed that the proposed method achieved second place among all methods on ten unseen test cases, with an average Dice value of 80.66% and a 95% Hausdorff distance of 2.96 mm.
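The Dice value used on the leaderboard measures overlap between a predicted mask and the ground truth. A minimal sketch, with toy masks standing in for real organ segmentations (the `dice` helper and the voxel sets are illustrative):

```python
def dice(pred, truth):
    # Dice coefficient between two binary masks given as sets of voxel indices:
    # twice the intersection size over the sum of the two mask sizes.
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

# Toy "masks": voxel indices labelled as one organ.
predicted    = {3, 4, 5, 6}
ground_truth = {4, 5, 6, 7}
print(dice(predicted, ground_truth))  # 0.75
```

In practice the coefficient is computed per organ and averaged, which is how a single 80.66% figure summarizes 22 structures.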

    • Study on Co-Expression of Cannabinoid Receptor 1 with Parvalbumin in the Hippocampal Neurons

      2020, 9(2):25-37. DOI: 10.12146/j.issn.2095-3135.20200209001


      Abstract: The endocannabinoid system, which is widely distributed in the mammalian central nervous system, is involved in regulating a variety of physiological processes and behavioral responses. Cannabinoid receptor 1 (CB1R) is abundantly localized on neuronal synaptic terminals that express various neurotransmitter receptors, such as those for somatostatin, cholecystokinin, dopamine, and serotonin. Activation of CB1R inhibits the release of presynaptic neurotransmitters and retrogradely controls neuronal excitability. However, there are few reports on the expression of CB1R in parvalbumin (PV)-positive gamma-aminobutyric acid (GABA) interneurons in the central nervous system. In this study, immunofluorescence staining and laser confocal imaging were used to identify PV+ and CB1R+ cells in the hippocampus. We found that a small number of neurons in the hippocampus of male and female CB1R-iCre-EGFP mice co-expressed PV and CB1R. Moreover, there was no sex difference in the distribution of PV+/CB1R+ neurons. These results provide a reference for further elucidating the functional interaction between PV and CB1R, and contribute to a more comprehensive understanding of the regulatory role of the endocannabinoid system.

    • Electronic Information
    • An Accurate Bin-Picking System Using Temporal Encoded Structured Light Sensor

      2020, 9(2):38-49. DOI: 10.12146/j.issn.2095-3135.20200110001


      Abstract: Robot bin-picking systems are an important component of many automation applications, such as sorting, loading, and unloading. However, it is difficult to calculate the pose of an object reliably, quickly, and accurately when the object is occluded or placed in disorder. This paper proposes a 3D vision-guided bin-picking system for free-form objects. In this system, structured-light systems are used to perform high-precision 3D reconstruction of the targets and to build an offline 3D template library. A standard template is then matched against the preprocessed point cloud of the scene. Once the matching parameters are obtained, the robot's grasping pose is calculated through the transformation matrix between the coordinate systems, and the robot is guided to grasp the target object. Experimental results show that the robot bin-picking system can grasp irregular targets reliably, quickly, and precisely.
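The last step described above, turning matching parameters into a grasp, comes down to applying the template-to-scene transformation matrix to a grasp point defined once on the template. A minimal sketch, assuming a hypothetical 4x4 homogeneous transform (the matrix and point values are illustrative, not from the paper):

```python
def apply_transform(T, p):
    # Apply a 4x4 homogeneous transform T (row-major nested lists) to a 3D point p.
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

# Hypothetical matching result: a 90-degree rotation about Z plus a translation,
# mapping template coordinates into the scene/robot frame.
T_scene_template = [
    [0.0, -1.0, 0.0, 100.0],
    [1.0,  0.0, 0.0,  50.0],
    [0.0,  0.0, 1.0,  20.0],
    [0.0,  0.0, 0.0,   1.0],
]

# Grasp point defined once on the offline template, mapped into the scene.
grasp_template = (10.0, 0.0, 5.0)
print(apply_transform(T_scene_template, grasp_template))  # (100.0, 60.0, 25.0)
```

The full grasp pose also carries the rotation part of the same matrix, which fixes the gripper orientation.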

    • XGBoost-Based Gene Network Inference Method for Steady-State Data

      2020, 9(2):50-59. DOI: 10.12146/j.issn.2095-3135.20191231001


      Abstract: Inferring gene regulatory networks (GRNs) from steady-state gene expression data remains a challenge in systems biology. A large number of potential direct or indirect regulatory relationships are difficult to identify with traditional methods. To address this issue, we propose a new method based on a boosting ensemble model, and apply randomization and regularization to alleviate model overfitting. Because the weights produced by different subproblems are on inconsistent scales, we combine normalization and statistical methods to process the initial weights. On the benchmark datasets from the DREAM5 challenge, our method achieves better performance than other state-of-the-art methods. On the in-silico simulated dataset, the two evaluation indicators, area under the precision-recall curve (AUPR) and area under the receiver operating characteristic curve (AUROC), are significantly better than those of existing methods, and the accuracy is also higher on the real experimental data of two organisms, E. coli and S. cerevisiae. For AUROC in particular, the scores exceed the best existing methods.
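The normalization step mentioned above addresses a concrete problem: each per-target-gene subproblem produces raw importance weights on its own scale, and they must be made comparable before edges are ranked globally. A minimal sketch of that aggregation, assuming hypothetical raw scores (the gene names and the `rank_edges` helper are illustrative, not the authors' algorithm in full):

```python
def rank_edges(importances):
    # importances[target][regulator] -> raw importance score from that target's
    # subproblem. Normalize each subproblem to sum to 1 so scores are comparable,
    # then rank candidate regulatory edges globally.
    edges = []
    for target, scores in importances.items():
        total = sum(scores.values()) or 1.0
        for regulator, s in scores.items():
            edges.append((regulator, target, s / total))
    return sorted(edges, key=lambda e: -e[2])

# Hypothetical raw importance scores from per-gene boosting models.
raw = {
    "G1": {"G2": 8.0, "G3": 2.0},
    "G2": {"G1": 0.3, "G3": 0.1},
}
for regulator, target, w in rank_edges(raw):
    print(regulator, "->", target, round(w, 2))
```

Without the per-subproblem normalization, the G1 model's larger raw scale would dominate the ranking regardless of the actual evidence for each edge.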

    • A Deep Learning-Based Spatio-Temporal Features Extraction Method for Network Flow

      2020, 9(2):60-69. DOI: 10.12146/j.issn.2095-3135.20191231002


      Abstract: Network intrusion detection is one of the core research areas of cyber security, and network traffic anomaly detection is common in network intrusion detection systems. By monitoring network traffic, these systems can effectively track anomalous traffic and issue alerts. The field has developed for decades, and conventional approaches include rule-based and feature-engineering-based methods. However, the changing characteristics of network traffic require these methods to continuously gather new rules and generate new features, which results in a labor-intensive workload and comparatively poor feature quality. To solve this problem, a deep learning-based spatio-temporal feature extraction method is proposed. It combines convolutional neural networks and long short-term memory networks to learn the spatio-temporal features of raw network traffic. The method is evaluated on the MAWILab network trace data. A multi-layer perceptron, convolutional neural networks alone, and long short-term memory networks alone are used for comparison with the proposed approach. The features generated by each method are used to classify the traffic, which assesses the quality of each method's feature extraction. Experiments show that the proposed method outperforms the other methods in the effectiveness of spatio-temporal feature extraction.
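In such a CNN-plus-LSTM pipeline, convolutional layers typically extract a spatial feature vector from each packet, and the LSTM then models the sequence of packets in a flow; the shape bookkeeping between the two stages is the part that is easy to get wrong. A minimal sketch of that bookkeeping, assuming a hypothetical layout (100-byte packets, a kernel-5 convolution, and stride-2 pooling are illustrative choices, not the paper's architecture):

```python
def conv1d_out(n, kernel, stride=1, pad=0):
    # Output length of a 1-D convolution or pooling layer.
    return (n + 2 * pad - kernel) // stride + 1

# Hypothetical pipeline: each packet is a 100-byte vector; a flow is a sequence
# of packets. The CNN produces one feature vector per packet (per time step).
n = 100
n = conv1d_out(n, kernel=5)             # convolution: 96
n = conv1d_out(n, kernel=2, stride=2)   # max-pooling: 48
print(n)  # spatial feature length fed per time step into the LSTM
```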
