
Brain-computer interface wheelchair: Brain-computer interface (BCI) devices are a widely explored research area. To interact with brain signals, an interface must be created between the brain and a computer. The main purpose of the BCI is to provide a communication channel for people who suffer from different disabilities, and this communication is established through brain activity and the facial-expression EOG and EMG signals used in the wheelchair (Gan et al, 2007). The brain-computer interface wheelchair uses electromyography (EMG) and electrooculography (EOG) signals: EMG signals control the direction, and EOG signals control the speed of the wheelchair (Gan et al, 2017). These signals are received by a Cyberlink system that generates the control commands and switches the state of the wheelchair between a control state and a non-control state (Gan et al, 2007). When the wheelchair is in the non-control state, the user can perform any activity without "considering the triggering control commands" (Gan et al, 2017). The signals are sent to the on-board PC, where they are processed to infer the human intention and make control decisions (Jang et al, 2016). The digital signal processing (DSP) motion controller receives these decisions and actuates the motors toward the direction wanted by the user (Jang et al, 2016). Moreover, this device makes it easier to diagnose the disease a person is suffering from, through the various signals obtained from EMG (Desai et al, 2017). According to the research on computational simplicity by Meor et al (2015), the time domain is not consistent even though the EMG signal was used together with the frequency domain; this is a disadvantage for the wheelchair, as it can lead to losing access to the domain.
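The control flow described above — a trigger toggling between a control and a non-control state, with EMG selecting the direction and EOG setting the speed — can be sketched in Python. This is an illustrative model only: the class name, the command vocabulary and the speed range are assumptions, not the actual Cyberlink interface.

```python
from enum import Enum

class Mode(Enum):
    CONTROL = "control"
    NON_CONTROL = "non-control"

class BCIWheelchairController:
    """Illustrative sketch of the control/non-control state machine:
    EMG-derived commands pick the direction, EOG-derived levels set
    the speed, and inputs are ignored in the non-control state."""

    def __init__(self):
        self.mode = Mode.NON_CONTROL
        self.direction = "stop"
        self.speed = 0.0

    def toggle_mode(self):
        # A dedicated trigger signal switches the wheelchair's state.
        self.mode = (Mode.CONTROL if self.mode is Mode.NON_CONTROL
                     else Mode.NON_CONTROL)

    def update(self, emg_command, eog_speed_level):
        # In the non-control state all inputs are ignored, so the user
        # can perform other activities without triggering commands.
        if self.mode is Mode.NON_CONTROL:
            return self.direction, self.speed
        if emg_command in ("forward", "backward", "left", "right", "stop"):
            self.direction = emg_command
        # Clamp the EOG-derived speed level to an assumed [0, 1] range.
        self.speed = max(0.0, min(eog_speed_level, 1.0))
        return self.direction, self.speed
```

In the non-control state the same input leaves the wheelchair stopped; after toggling, the identical EMG/EOG input drives it forward, mirroring the behaviour reported by Gan et al.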
In addition, Shinde et al (2016) noted that these wheelchairs can also pose a hazard if a wrong command is sent to the wheelchair through an undesired user interaction, in which case the user, and the other people around him, can be harmed. The signals also vary between different people, and detecting the details of a particular brain and the small changes in the signals (EOG or EMG) can be challenging, as these signals are usually contaminated with undesirable signals and external noise (Gan et al, 2007).

Figure (1): brain-computer interface (Rebsamen et al, 2007)

Head-gesture recognition wheelchair: This wheelchair uses head movements to control the direction of the wheelchair. As mentioned by Ruzaij and Poonguzhali (2012), this wheelchair compares the location of the lips with fixed rectangular windows. When the head turns up, down, left or right, the rectangular window of the lips changes (Zhang-fang et al, 2010). The head-gesture recognition wheelchair uses the Adaboost algorithm, which recognises the facial expressions or head gestures. The detected gestures are sent to a recogniser, and the recognition results are then passed to a converter that operates the wheelchair movements (Ju et al, 2010). Figure (2) from Zhang-fang (2010) illustrates the Adaboost algorithm for the different movement directions: when the head turns up, the detected lip window lies on the upper side of the thick solid line of the rectangular window, and the same holds for the other directions, as the detected lip window lies on the side of the thick solid rectangular window corresponding to the desired direction. Turning the head up means that the wheelchair should speed up, and turning it down means that the wheelchair should slow down.
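The decision rule above — comparing the detected lip window with a fixed reference rectangle to decide which way the head has turned — can be sketched as a small classifier. The function name, box format and image-coordinate convention are assumptions for illustration; the actual lip detection would come from the Adaboost detector.

```python
def classify_head_gesture(lip_box, ref_box):
    """Classify a head gesture by comparing the centre of the detected
    lip window with a fixed reference rectangle, as in the lip-window
    scheme described above. Boxes are (x, y, width, height) in image
    coordinates, where y grows downward (an assumed convention)."""
    lx, ly, lw, lh = lip_box
    rx, ry, rw, rh = ref_box
    lip_cx, lip_cy = lx + lw / 2, ly + lh / 2
    if lip_cy < ry:        # lip window above the reference: head turned up
        return "up"
    if lip_cy > ry + rh:   # below the reference: head turned down
        return "down"
    if lip_cx < rx:        # left of the reference
        return "left"
    if lip_cx > rx + rw:   # right of the reference
        return "right"
    return "neutral"       # inside the reference window: no gesture
```

Each gesture label would then be converted into a motion command, mirroring the recogniser-to-converter pipeline of Ju et al (2010).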
While turning, the wheelchair keeps a fixed angular velocity and linear speed of 20 degrees/sec and 20 cm/sec respectively (Zhang-fang et al, 2010).

Figure (2): Head-gesture recognition (Zhang-fang, 2010)

However, since smart wheelchairs have restricted on-board computing, the Adaboost algorithm used cannot meet real-time performance requirements. Although the face-tracking algorithm can run very fast, it is not robust against a noisy background or changing illumination conditions (Pei et al, 2007).

Results: The three devices mentioned above are widely used nowadays; however, some of these types cannot satisfy all user requirements, so researchers have proposed a new design based on these devices. The new smart wheelchair should offer self-navigation for people with disabilities: it has to have sonars for navigating narrow paths, it should be capable of passing through doors and following straight lines until the user tells it to stop or change direction, and its most important feature is obstacle avoidance (Leaman et al, 2017). Researchers suggested that smart wheelchairs should have pre-programmed maps or routes so that it is easier for the wheelchair to navigate to the desired location; also, to reduce the concerns related to the smart wheelchair and to let the wheelchair build an individualised profile, it should be able to communicate effectively with the user (Leaman et al, 2017). Desai et al (2017) said that future smart wheelchairs should take an add-on approach, which allows the system used in one wheelchair to be used in a different wheelchair, and that all input types should be supported, whether voice recognition, head-gesture recognition or a brain-computer interface, depending on the user's decision.
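The obstacle-avoidance behaviour required of the proposed design can be sketched as a simple gate on the user's direction command, using sonar distances per sector. The function name, sector labels and distance thresholds are illustrative assumptions, not values from the cited work.

```python
def avoid_obstacles(sonar_cm, desired, stop_cm=30.0, slow_cm=80.0):
    """Sketch of sonar-based obstacle avoidance: sonar readings
    (dict mapping sector name -> distance in cm) gate the desired
    direction command. Thresholds are assumed for illustration.
    Returns (direction, speed_scale)."""
    d = sonar_cm.get(desired, float("inf"))
    if d < stop_cm:
        # Too close in the commanded direction: override the user and
        # steer toward the clearest alternative sector, if one is safe.
        alternatives = {k: v for k, v in sonar_cm.items() if k != desired}
        best = max(alternatives, key=alternatives.get, default=None)
        if best is not None and alternatives[best] >= slow_cm:
            return best, 0.5
        return "stop", 0.0
    if d < slow_cm:
        return desired, 0.5  # obstacle nearby: proceed at reduced speed
    return desired, 1.0      # path clear: full commanded speed
```

A rule of this shape lets the wheelchair keep following the user's intent in open space while still guaranteeing the obstacle-avoidance behaviour that Leaman et al (2017) identify as the most important feature.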
Many people have complained about wearing the EMG electrode devices because they are not appropriate for their looks; researchers are therefore considering a wireless device, since EMG signals are efficient for controlling the wheelchair's direction. The wheelchair should also be able to deal with uncertainty in sensing, which is considered a challenge since it needs to understand, analyse, perceive and react to human activity. Voice recognition, head-gesture recognition and EMG signals need to be developed further to handle a vast range of possible inputs and situations (Leaman et al, 2017).