.01

ABOUT

PERSONAL DETAILS
Wangchan Valley 555 Moo 1 Payupnai, Wangchan, Rayong 21210
Payongkit.L_s17@vistec.ac.th
Hello! I am a programmer, passionate about programming and coding. Welcome to my personal and academic profile. I am available for freelance work.

BIO

ABOUT ME

I'm a 25-year-old junior data scientist, passionate about applying machine learning to real-world problems. I graduated from the School of Mathematics, Institute of Science, at Suranaree University of Technology. I am now a Ph.D. student in the School of Information Science and Technology (IST) at the Vidyasirimedhi Institute of Science and Technology (VISTEC). My ongoing research covers radar technology, emotion recognition, brain-computer interfaces, and neural engineering with machine learning.

HOBBIES

INTERESTS

- Watching online documentaries
- Learning how to cook
- Reading
- Listening to podcasts


.03

PUBLICATIONS

PUBLICATIONS LIST
12 Feb 2021

Sensor-Driven Achieving of Smart Living: A Review

IEEE Sensors Journal (Q1)


Journal Paper

About The Publication
This comprehensive review analyzes and summarizes recently published works on IEEE Xplore in sensor-driven smart living contexts. We gathered over 150 research papers, mostly from the past five years, and categorize them into four major research directions: activity tracking, affective computing, sleep monitoring, and ingestive behavior. We summarize each direction according to our defined sensor types: biomedical sensors, mechanical sensors, non-contact sensors, and others. The review also serves as one-stop literature for novices who intend to pursue research on sensor-driven applications for smart living. In conclusion, the state-of-the-art works, the publicly available data sources, and the future challenges (sensor selection, algorithms, and privacy) are the major contributions of this article.
12 Nov 2020

MetaSleepLearner: A Pilot Study on Fast Adaptation of Bio-signals-Based Sleep Stage Classifier to New Individual Subject Using Meta-Learning

IEEE Journal of Biomedical and Health Informatics (Q1)


Journal Paper

About The Publication
Identifying sleep stages from bio-signals requires time-consuming and tedious labor by skilled clinicians. Deep learning approaches have been introduced to tackle automatic sleep stage classification. However, replacing clinicians with an automatic system is difficult because individual bio-signals differ in many aspects, causing inconsistent model performance across incoming individuals. We therefore explore the feasibility of a novel approach capable of assisting clinicians and lessening their workload. We propose a transfer learning framework, MetaSleepLearner, based on Model-Agnostic Meta-Learning (MAML), to transfer sleep staging knowledge acquired from a large dataset to new individual subjects. The framework requires clinicians to label only a few sleep epochs, leaving the remainder to the system. Layer-wise Relevance Propagation (LRP) was also applied to understand the learning course of our approach. On all acquired datasets, MetaSleepLearner achieved a 5.4% to 17.7% improvement over the conventional approach, with a statistically significant difference in means. Model interpretation after adaptation to each subject also confirmed that the gains reflected reasonable learning. MetaSleepLearner outperformed the conventional approaches when fine-tuned on recordings of both healthy subjects and patients. This is the first work to investigate a non-conventional pre-training method, MAML, opening the possibility of human-machine collaboration in sleep stage classification and easing clinicians' burden by requiring labels for only several epochs rather than an entire recording.
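The MAML idea behind MetaSleepLearner, learning a shared initialization that adapts to a new subject from only a few labelled epochs, can be sketched with a toy first-order variant on synthetic linear "subjects". This is an illustration of the inner/outer loop only, not the paper's model; the linear task, learning rates, and step counts are all invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A toy 'subject': linear data y = a*x + b with subject-specific a, b."""
    a, b = rng.uniform(-1.0, 1.0, size=2)
    def draw(n):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, a * x + b
    return draw

def mse_grad(w, x, y):
    """Mean squared error of y_hat = w[0]*x + w[1] and its gradient."""
    err = w[0] * x + w[1] - y
    return np.mean(err ** 2), np.array([np.mean(2 * err * x), np.mean(2 * err)])

def adapt(w, x, y, inner_lr=0.1):
    """Inner loop: a single gradient step on a few labelled samples."""
    _, g = mse_grad(w, x, y)
    return w - inner_lr * g

def maml_train(meta_steps=2000, inner_lr=0.1, meta_lr=0.05):
    """First-order MAML: learn an initialisation that adapts quickly."""
    w = np.zeros(2)
    for _ in range(meta_steps):
        draw = sample_task()
        xs, ys = draw(5)                     # support set: the few labelled epochs
        w_fast = adapt(w, xs, ys, inner_lr)  # task-specific adaptation
        xq, yq = draw(20)                    # query set: fresh data, same subject
        _, gq = mse_grad(w_fast, xq, yq)
        w = w - meta_lr * gq                 # first-order outer (meta) update
    return w

w_meta = maml_train()
draw = sample_task()                         # an unseen 'subject'
xs, ys = draw(5)
loss_before, _ = mse_grad(w_meta, xs, ys)
loss_after, _ = mse_grad(adapt(w_meta, xs, ys), xs, ys)
print(loss_before, loss_after)
```

Comparing the loss before and after `adapt` on the unseen task should show the single inner-loop step reducing the error, which is the quick-adaptation behaviour the framework relies on.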
22 Sep 2020

SleepPoseNet: Multi-View for Sleep Postural Transition Recognition Using UWB

IEEE Journal of Biomedical and Health Informatics (Q1)


Journal Paper

About The Publication
Recognizing movements during sleep is crucial for monitoring patients with sleep disorders. However, Ultra-Wideband (UWB) radar has not been widely explored for classifying human sleep postures. This study investigates the performance of an off-the-shelf single-antenna UWB sensor in a novel application: sleep postural transition (SPT) recognition. The proposed multi-view learning model, SleepPoseNet (SPN), combined with time series data augmentation, classifies four standard SPTs. SPN captures both time and frequency features, including the movement and direction of sleeping positions. On data recorded from 38 volunteers, SPN achieved a mean accuracy of 73.7±0.8%, significantly outperforming the 59.9±0.7% obtained by a deep convolutional neural network (DCNN) from recent state-of-the-art work on human activity recognition using UWB. Beyond UWB systems, SPN with data augmentation can be adopted to learn and classify time series data in various applications.
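The kind of time series data augmentation SPN pairs with multi-view learning can be illustrated with a few standard 1-D signal transforms. The specific transforms below (jitter, amplitude scaling, circular time shift) and their parameters are generic illustrations, not the augmentations used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(sig, sigma=0.05):
    """Additive Gaussian noise."""
    return sig + rng.normal(0.0, sigma, size=sig.shape)

def scale(sig, low=0.8, high=1.2):
    """Random amplitude scaling."""
    return sig * rng.uniform(low, high)

def time_shift(sig, max_shift=20):
    """Circular shift along the time axis."""
    return np.roll(sig, rng.integers(-max_shift, max_shift + 1), axis=-1)

def augment(sig):
    """Compose the transforms: each call yields a new plausible variant."""
    return time_shift(scale(jitter(sig)))

# One synthetic 'radar frame': 500 time samples of a smooth waveform.
frame = np.sin(np.linspace(0.0, 8.0 * np.pi, 500))

# Sixteen augmented copies, multiplying the effective training set.
batch = np.stack([augment(frame) for _ in range(16)])
print(batch.shape)
```

Because a classifier trained on such variants sees many slightly perturbed versions of each recording, it becomes less sensitive to the exact timing and amplitude of a movement, which is the usual motivation for augmenting small time series datasets.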
15 Jul 2019

Consumer Grade Brain Sensing for Emotion Recognition

IEEE Sensors Journal (Q1)


Journal Paper

About The Publication
For several decades, electroencephalography (EEG) has been one of the most commonly used tools for emotional state recognition via monitoring of distinctive brain activities. An array of datasets has been generated using diverse emotion-eliciting stimuli, with the resulting brainwave responses conventionally captured by high-end EEG devices. However, the applicability of these devices is limited by practical constraints, and they may prove difficult to deploy in the highly mobile contexts omnipresent in everyday life. In this study, we evaluate the potential of OpenBCI to bridge this gap by first comparing its performance to a research-grade EEG system, employing the same algorithms applied to benchmark datasets. For emotion classification, we also propose a novel method to facilitate the selection of audio-visual stimuli of high/low valence and arousal. We recruited 200 healthy volunteers of varying ages to identify the top 60 affective video clips from 120 candidates through standardized self-assessment, genre tags, and unsupervised machine learning. In addition, 43 participants were enrolled to watch the pre-selected clips while emotional EEG brainwaves and peripheral physiological signals were collected. These recordings were analyzed, and the extracted features were fed into a classification model to predict whether the elicited signals were associated with high or low valence and arousal. Our prediction accuracies proved comparable to those of previous studies that utilized more costly EEG amplifiers for data acquisition.
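The general shape of such a pipeline, band-power features extracted from a signal and fed to a binary valence/arousal classifier, can be sketched in a few lines. Everything below (the 250 Hz sampling rate, the synthetic alpha-modulated trials, the plain logistic-regression classifier) is an invented stand-in for the study's actual recordings and model:

```python
import numpy as np

rng = np.random.default_rng(7)
FS = 250  # assumed sampling rate in Hz

def band_power(sig, lo, hi, fs=FS):
    """Mean periodogram power of sig inside the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / sig.size
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def features(sig):
    """Classic EEG bands: theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz)."""
    return np.array([band_power(sig, 4, 8),
                     band_power(sig, 8, 13),
                     band_power(sig, 13, 30)])

def make_trial(high):
    """Synthetic 2 s trial; 'high' trials carry a stronger 10 Hz alpha rhythm."""
    t = np.arange(2 * FS) / FS
    amp = 2.0 if high else 0.5
    return amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0.0, 1.0, t.size)

# Tiny labelled set: 40 'high' and 40 'low' trials.
X = np.array([features(make_trial(lab)) for lab in [True] * 40 + [False] * 40])
y = np.array([1.0] * 40 + [0.0] * 40)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise each feature

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / y.size
    b -= 0.1 * float(np.mean(p - y))

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = float(np.mean((p > 0.5) == (y == 1)))
print(acc)
```

On this synthetic data the alpha-band feature alone separates the classes, so the training accuracy should be high; with real EEG, the same skeleton would be evaluated on held-out subjects rather than the training set.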
28 Oct 2018

Deep Neural Networks with Weighted Averaged Overnight Airflow Features for Sleep Apnea-Hypopnea Severity Classification

TENCON 2018: Jeju, South Korea


Conference Paper

.04

RESEARCH

LABORATORY TEAM

Dr. Theerawit Wilaiprasitporn

Faculty Member

Nannapas Banluesombatkul

Ph.D. student

Payongkit Lakhan

Ph.D. student

RESEARCH PROJECTS

Emotion Recognition

For several decades, electroencephalography (EEG) has been one of the most commonly used tools for emotional state recognition via monitoring of distinctive brain activities. An array of datasets has been generated using diverse emotion-eliciting stimuli, with the resulting brainwave responses conventionally captured by high-end EEG devices. However, the applicability of these devices is limited by practical constraints, and they may prove difficult to deploy in the highly mobile contexts omnipresent in everyday life. In this study, we evaluate the potential of OpenBCI to bridge this gap by first comparing its performance to a research-grade EEG system, employing the same algorithms applied to benchmark datasets. For emotion classification, we also propose a novel method to facilitate the selection of audio-visual stimuli of high/low valence and arousal. We recruited 200 healthy volunteers of varying ages to identify the top 60 affective video clips from 120 candidates through standardized self-assessment, genre tags, and unsupervised machine learning. In addition, 43 participants were enrolled to watch the pre-selected clips while emotional EEG brainwaves and peripheral physiological signals were collected. These recordings were analyzed, and the extracted features were fed into a classification model to predict whether the elicited signals were associated with high or low valence and arousal. Our prediction accuracies proved comparable to those of previous studies that utilized more costly EEG amplifiers for data acquisition.

Sleep Posture by UWB radar

Recognizing movements during sleep is crucial for monitoring patients with sleep disorders. However, Ultra-Wideband (UWB) radar has not been widely explored for classifying human sleep postures. This study investigates the performance of an off-the-shelf single-antenna UWB sensor in a novel application: sleep postural transition (SPT) recognition. The proposed multi-view learning model, SleepPoseNet (SPN), combined with time series data augmentation, classifies four standard SPTs. SPN captures both time and frequency features, including the movement and direction of sleeping positions. On data recorded from 38 volunteers, SPN achieved a mean accuracy of 73.7±0.8%, significantly outperforming the 59.9±0.7% obtained by a deep convolutional neural network (DCNN) from recent state-of-the-art work on human activity recognition using UWB. Beyond UWB systems, SPN with data augmentation can be adopted to learn and classify time series data in various applications.

MetaSleepLearner

Identifying sleep stages from bio-signals requires time-consuming and tedious labor by skilled clinicians. Deep learning approaches have been introduced to tackle automatic sleep stage classification. However, replacing clinicians with an automatic system is difficult because individual bio-signals differ in many aspects, causing inconsistent model performance across incoming individuals. We therefore explore the feasibility of a novel approach capable of assisting clinicians and lessening their workload. We propose a transfer learning framework, MetaSleepLearner, based on Model-Agnostic Meta-Learning (MAML), to transfer sleep staging knowledge acquired from a large dataset to new individual subjects. The framework requires clinicians to label only a few sleep epochs, leaving the remainder to the system. Layer-wise Relevance Propagation (LRP) was also applied to understand the learning course of our approach. On all acquired datasets, MetaSleepLearner achieved a 5.4% to 17.7% improvement over the conventional approach, with a statistically significant difference in means. Model interpretation after adaptation to each subject also confirmed that the gains reflected reasonable learning. MetaSleepLearner outperformed the conventional approaches when fine-tuned on recordings of both healthy subjects and patients. This is the first work to investigate a non-conventional pre-training method, MAML, opening the possibility of human-machine collaboration in sleep stage classification and easing clinicians' burden by requiring labels for only several epochs rather than an entire recording.

Brain Biometrics

.05

TEACHING

CURRENT
  • 2019
    NOW
    Walailak University, Thailand

    Programming Fundamentals

    POSN Science Camp

    Teaching fundamental programming and basic coding.
  • 2019
    NOW
    Walailak University, Thailand

    Data Structures and Algorithms

    POSN Science Camp

    Basic Data Structures, Asymptotic Analysis (Big-O notation), Basic Recursion, Greedy Algorithms, Dynamic Programming, Graph Algorithms.
TEACHING HISTORY
  • 2015
    2018
    Suranaree University of Technology

    PROGRAMMING FUNDAMENTALS

    POSN SCIENCE CAMP

    Teaching fundamental programming and basic coding.
  • 2016
    2018
    NAKHON RATCHASIMA, THAILAND

    Calculus

    Upgrad tutor

    Concepts of Function, Limits and Continuity, Differentiation Rules, Application to Graphing, Rates, Approximations, and Extremum Problems, Definite and Indefinite Integration, The Fundamental Theorem of Calculus.
  • 2016
    2018
    NAKHON RATCHASIMA, THAILAND

    Object Oriented Programming

    Upgrad tutor

    Basic syntax of Java, Objects and Classes, OOP concepts
.08

CONTACT

Drop me a line

GET IN TOUCH

Address:

Vidyasirimedhi Institute of Science and Technology (VISTEC).

Wangchan Valley 555 Moo 1 Payupnai, Wangchan, Rayong 21210 Thailand

Tel:

+(66) 33-014-444

Email:

Payongkit.L_s17@vistec.ac.th