Teaching Assistant / PhD Candidate
- 01/2017 – 12/2021, Indianapolis, Indiana, USA
Purdue University Indianapolis. Major in Computer & Information Science, Purdue School of Science. Dissertation Title: Extraction and Integration of Physical Illumination in Dynamic Augmented Reality Environments. Expertise: Computer Graphics, Visualization, and Imaging. Relevant Coursework: Design and Analysis of Algorithms, Image Processing and Computer Vision, Compiling and Programming Systems, Advanced Graphics and Data Visualization, Computer Graphics.
- 01/2015 – 12/2016, Indianapolis, Indiana, USA
Purdue University Indianapolis. Major in Computer & Information Science, Purdue School of Science. Relevant Coursework: Image Processing and Computer Vision, Advanced Graphics and Data Visualization, Computer Graphics, Data Mining, Concepts in Computer Organization, Database Systems, Cryptography, Programming Languages, Advanced Mobility and Cloud Computing, Trustworthy Computing.
- 09/2001 – 07/2005, Medina, Saudi Arabia
Taibah University. Major in Computer Science, School of Computer Science & Engineering. Thesis: E-Learning Database & Website Services for Taibah University. Relevant Coursework: Discrete Mathematics, Numerical Computation, Computer Graphics, Artificial Intelligence, Theory of Computation.
- Member of the Courses Quality Committee (2011–2013)
- Academic Adviser (2012–2013)
- The 2nd IEEE International Conference on Information and Computer Technologies (ICICT'19)
- The 2019 IEEE SoutheastCon
- The 36th Computer Graphics International (CGI'19) in cooperation with ACM SIGGRAPH and Eurographics
- The 6th International Conference on Augmented Reality, Virtual Reality and Computer Graphics (SALENTO AVR'19)
- The 8th ACM International Symposium on Pervasive Displays (PerDis'19)
The Quranic Structure Visualization Using D3
This research addresses a structure-comprehension problem and proposes visualization as a solution. Studying a book generally begins with reading, underlining significant concepts, writing comments, and summarizing paragraphs; it is valuable to present this effort in a form that makes it memorable. The paper demonstrates how techniques that scientists developed for complex data-visualization problems can be applied, as a project that models visualization algorithms and techniques on a real-world dataset. We explore a node-link (force-layout tree) visualization as a convenient technique for showing the hierarchy of the Quran's chapters ("Sura") and verses. Interactive exploration, combined with color features, provides an important tool for revealing additional hidden structure in the data.
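As a rough illustration of the data side of this idea, the sketch below builds the nested JSON that a D3 hierarchy and force layout consume. The real project uses D3 in JavaScript on the full Quran text; the sura names, verse counts, and function name here are made up for the example.

```python
import json

# Hypothetical miniature dataset: two sura names mapped to their verse counts.
# The real project loads the full Quran text; these entries are illustrative only.
suras = {"Al-Fatihah": 7, "Al-Ikhlas": 4}

def build_hierarchy(suras):
    """Build the nested {name, children} structure that d3.hierarchy() expects."""
    return {
        "name": "Quran",
        "children": [
            {
                "name": sura,
                "children": [{"name": f"verse {i}"} for i in range(1, count + 1)],
            }
            for sura, count in suras.items()
        ],
    }

tree = build_hierarchy(suras)
print(json.dumps(tree)[:40])  # the JSON a force-layout tree would load
```

Serving this JSON to the browser is all the preprocessing a node-link view needs; the layout and interaction then live entirely on the D3 side.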
CUDA Based Volume Rendering
The rapid growth in image quality, together with demands for rendering speed, has been a challenge for programmers working on large-scale volume rendering, especially for medical datasets. The purpose of this project is to perform volume rendering on the GPU, whose parallel processing contributes massively to this field. The final result allows the user to interact with and explore the data using three-dimensional visualization techniques. The implementation handles different dataset types, such as medical CT scans, binary data, and segmentations; using the CUDA framework for this project significantly decreases the cost of volume analysis time.
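The core per-ray computation of volume ray casting is front-to-back alpha compositing. A minimal pure-Python sketch of that accumulation step is below; the project runs this per ray in a CUDA kernel, and the sample values here are invented for illustration.

```python
# Minimal front-to-back alpha compositing along one ray, the core accumulation
# step of volume ray casting. The GPU version runs this loop once per ray in
# parallel; this sketch shows only the math, on a single grayscale channel.

def composite(samples):
    """samples: list of (color, opacity) pairs ordered front to back."""
    color_acc = 0.0
    alpha_acc = 0.0
    for color, alpha in samples:
        # Each new sample is attenuated by the opacity already accumulated.
        color_acc += (1.0 - alpha_acc) * alpha * color
        alpha_acc += (1.0 - alpha_acc) * alpha
        if alpha_acc >= 0.99:       # early ray termination
            break
    return color_acc, alpha_acc

# A fully opaque first sample hides everything behind it.
print(composite([(1.0, 1.0), (0.5, 1.0)]))  # -> (1.0, 1.0)
```

Early ray termination is one reason GPU ray casting is fast in practice: opaque tissue in a CT scan lets most rays stop after a few samples.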
Mining Brain Network of Alzheimer’s Disease Patients
Efficient diagnosis of Alzheimer's disease (AD), the most common form of dementia in elderly patients, is of essential significance in medical research. Recent studies have shown that AD is closely related to structural changes in the brain network, i.e., the connectivity among different brain regions. Brain regions of interest (ROIs) and the neural connections among them provide useful pattern-based biomarkers to distinguish healthy controls (HC), patients with Mild Cognitive Impairment (MCI), and patients with AD. In this paper, we investigate multiple machine learning algorithms for identifying the connectivity among different brain regions. Our results demonstrate that the PCA, LDA, k-means, DBSCAN, and SVM algorithms are valuable in revealing similarities and differences in brain-region connectivity among these groups, which could help in diagnosing Alzheimer's disease.
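Of the algorithms named above, k-means is the simplest to sketch. The toy below clusters made-up one-dimensional "connectivity" scores; the paper used full library implementations on real connectivity features, so this is only the idea.

```python
# A minimal k-means sketch in pure Python, on invented 1D "connectivity" scores.
# Real use clusters high-dimensional ROI-connectivity vectors with a library.

def kmeans(xs, centers, iters=10):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for x in xs:
            i = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[i].append(x)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two clearly separated groups; centers converge near 0.15 and 0.95.
print(kmeans([0.1, 0.2, 0.9, 1.0], [0.0, 1.0]))
```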
Synchronization in Parallel Programming Case study: OpenMP, CUDA, GO
Programming parallel machines as effectively as sequential ones would ideally require a language that provides high-level constructs to avoid the programming errors that frequently occur when communicating in a parallel environment. Because task parallelism is generally viewed as more error-prone than data parallelism, we survey three mainstream and productive parallel programming languages that handle this difficult issue: OpenMP, CUDA, and Go. Using examples of parallel implementations that handle synchronization, this paper describes how the fundamentals of parallel programming, namely collective and point-to-point synchronization, are managed in these languages. Despite the abundance of different names and ideas introduced by these languages, our study is intended to give users and designers (of languages and compilers) an overview of current parallel languages.
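The two patterns the survey contrasts can be shown compactly with Python's stdlib primitives, standing in for the OpenMP/CUDA/Go constructs: a `Barrier` for collective synchronization and an `Event` for point-to-point signaling. The producer/consumer roles here are invented for the example.

```python
# Collective vs point-to-point synchronization, sketched with Python's stdlib
# threading primitives (Python stands in here only to show the two patterns;
# the surveyed languages use barriers, __syncthreads(), channels, etc.).
import threading

results = []
barrier = threading.Barrier(2)     # collective: all parties wait together
ready = threading.Event()          # point-to-point: one thread signals another

def producer():
    results.append("produced")
    ready.set()                    # signal the consumer
    barrier.wait()                 # then join the collective barrier

def consumer():
    ready.wait()                   # block until the producer signals
    results.append("consumed")
    barrier.wait()

threads = [threading.Thread(target=f) for f in (producer, consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(results)  # -> ['produced', 'consumed']
```

The point-to-point `Event` fixes the ordering between the two appends, while the `Barrier` only guarantees that both threads have finished before either proceeds.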
Internet of Things (IoT), Middleware Architecture, Based on Smart-Home: Survey
Recently, the Internet of Things (IoT) and home energy management frameworks have become prominent topics. Electronic-appliance recognition technology can help clients identify the appliances in use and improve their power-consumption habits, since home users routinely switch appliances on and off. This survey therefore reviews smart homes and appliances within the IoT concept in three sections. First, we discuss several features and characteristics desired in a practical IoT architecture. Second, we present one possible architecture that mirrors the design principles sketched in the first section. Third, we present some currently applicable cloud architectures for smart homes and take a closer look at what each design provides and how it might solve some of the issues the system could encounter.
Bitcoin: A Deeper Look on Cryptocurrency Concepts and Challenges
Bitcoin is the best-known cryptographic currency system to date; it opened a wide scope for digital currencies and has influenced several surrounding fields by promoting noteworthy research and interest. This survey organizes the research outcomes that develop the overall concept of cryptocurrency. Our approach involves these steps: first, we introduce background on the Bitcoin protocol and its building blocks; second, we compare the online banking model with the Bitcoin model on key points beyond decentralization; third, we explore attacks and vulnerabilities in the Bitcoin structure throughout the survey and collect further attacks at the end. In the process, we explain and discuss numerous essential methods that share concepts with traditional currency transactions and therefore apply to more than one specific digital currency.
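One of the building blocks mentioned above is the hash chain linking blocks. The toy below shows only that linkage; real Bitcoin blocks also carry a Merkle root of transactions, a timestamp, a difficulty target, and a proof-of-work nonce, and the payload strings here are invented.

```python
# A toy hash chain: each block commits to the previous block's hash, so
# altering any earlier payload changes every later hash. Payloads are made up;
# real blocks hash a structured header, not a plain string.
import hashlib

def block_hash(prev_hash, payload):
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

genesis = block_hash("0" * 64, "genesis")
block1 = block_hash(genesis, "alice pays bob 1 BTC")
block2 = block_hash(block1, "bob pays carol 0.5 BTC")

# Tampering with an earlier payload breaks the chain from that point on.
tampered1 = block_hash(genesis, "alice pays bob 10 BTC")
print(block2 != block_hash(tampered1, "bob pays carol 0.5 BTC"))  # -> True
```

This tamper-evidence is what forces an attacker to redo the proof-of-work for every block after the one they alter.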
Don’t Photoshop it! Artistic Image Filtering
Non-photorealistic rendering is a method of image manipulation that modifies images for artistic purposes. Photographs can be stylized by simplifying or emphasizing their perceptual information. Most of the operations are performed with image filtering, by means of convolution between an image and a kernel. Other operations include image adjustment, image arithmetic, color conversion, and color quantization. These operations are used to implement three artistic effects in this project: a cartoon effect, a sketch effect, and a gilding effect. In addition, built-in Matlab functions are used to create three more effects: a dithering operation to pointillize the image, a dilation operation with a circular structuring element, and an inversion operation.
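The convolution at the core of these filters can be sketched in a few lines. The project was written in Matlab; this pure-Python version uses a made-up 2x2 box-blur kernel and a tiny constant image just to show the sliding-window sum.

```python
# The convolution core behind the filtering effects, on a tiny grayscale grid.

def convolve(image, kernel):
    """'Valid' 2D convolution: the kernel slides only where it fully fits."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            # Flip the kernel (true convolution) and sum the weighted window.
            acc = sum(image[y + i][x + j] * kernel[kh - 1 - i][kw - 1 - j]
                      for i in range(kh) for j in range(kw))
            row.append(acc)
        out.append(row)
    return out

box = [[0.25, 0.25], [0.25, 0.25]]   # 2x2 box blur: weights sum to 1
flat = [[8] * 3 for _ in range(3)]   # a constant image stays constant
print(convolve(flat, box))           # -> [[8.0, 8.0], [8.0, 8.0]]
```

Swapping the kernel is all it takes to move between effects: an averaging kernel blurs toward the cartoon look, while a derivative kernel extracts the outlines a sketch effect needs.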
In this project, a camera calibration method was implemented. A set of data points was manually picked from the given image and used in the numerical calculations for estimating the intrinsic and extrinsic camera parameters. The reprojected coordinates were found to be close to the picked coordinates. In addition, the estimation process was repeated several times and the variation in the estimated parameters was analyzed in detail.
This calibration algorithm assumes no skew and no nonlinear distortions. Nevertheless, the obtained results are fairly accurate, which means the method is applicable in most situations.
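For reference, the forward pinhole model that calibration inverts looks like the sketch below. The focal lengths and principal point in `K` are illustrative values, not the parameters estimated in the project, and skew is set to zero as the paragraph above assumes.

```python
# Pinhole projection with an intrinsic matrix K: the forward model whose
# parameters (fx, fy, cx, cy) the calibration estimates. Values are made up.

def project(K, point):
    """Map a 3D camera-space point (X, Y, Z) to pixel coordinates (u, v)."""
    X, Y, Z = point
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    return (fx * X / Z + cx, fy * Y / Z + cy)

K = [[800.0, 0.0, 320.0],   # fx, skew (assumed 0), cx
     [0.0, 800.0, 240.0],   # fy, cy
     [0.0, 0.0, 1.0]]

print(project(K, (0.1, -0.05, 2.0)))  # -> (360.0, 220.0)
```

Reprojection error is simply the distance between such projected points and the manually picked pixels, which is why it is the natural accuracy measure for the experiment above.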
Low-Level Image Processing
The goal of this project is to implement several point and neighborhood processing functions. Image transformation and filtering are useful for low-level image processing. A point transformation can be applied for contrast stretching or histogram modifications. A spatial filtering operation can be applied to reduce the unwanted noise in a particular image. Using appropriate methods for different cases can result in efficient tools for image restoration and enhancement.
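A concrete point transformation from this family is linear contrast stretching, sketched below on a made-up pixel list: the darkest value maps to 0 and the brightest to 255.

```python
# A point transformation example: linear contrast stretching remaps pixel
# values so the darkest pixel becomes 0 and the brightest 255 (made-up pixels).

def contrast_stretch(pixels, out_min=0, out_max=255):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                    # flat image: nothing to stretch
        return list(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

print(contrast_stretch([50, 100, 200]))  # -> [0, 85, 255]
```

Because each output pixel depends only on the corresponding input pixel, point transformations like this are a single pass over the image, unlike the neighborhood (filtering) operations.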
To compare the outputs with the Photoshop versions quantitatively, the corresponding images were superimposed as layers in Photoshop using the Difference layer mode. The Histogram window was then used to find the mean value of the pixel differences.
The largest errors are observed for log mapping and image rotation. The former can be explained by the use of different curves for the nonlinear transformation (log mapping versus gamma correction). The likely reason for the latter is that even small rounding errors in rotated pixel coordinates leave many pixels slightly off. Histogram equalization and Gaussian filtering have smaller errors, which can be explained by rounding issues in the pixel values. Finally, the result of median filtering is exactly the same as the Photoshop version, since the algorithms are identical and there are no rounding operations in the neighborhood processing.
Edge and feature detection
Canny edge detection is a reliable method to extract edges from various images. Defined as a sequence of 5 steps, it produces results by smoothing the image to reduce noise, finding directional derivatives to obtain edge magnitudes and angles, suppressing non-maxima for edge thinning, double thresholding to find strong and weak edges, and tracking by hysteresis to find true edges. It was possible to automate the whole process for multiple images using the same parameters.
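The last two of the five steps, double thresholding and hysteresis tracking, can be sketched as a small flood fill: strong pixels seed edges, and weak pixels survive only if connected to a strong one. The gradient-magnitude grid and thresholds below are made up for the example.

```python
# Double thresholding + hysteresis (the final Canny steps) on a made-up grid
# of gradient magnitudes. Strong pixels (>= high) seed edges; weak pixels
# (>= low) are kept only when 8-connected to an edge.
from collections import deque

def hysteresis(mag, low, high):
    h, w = len(mag), len(mag[0])
    edges = [[False] * w for _ in range(h)]
    queue = deque((y, x) for y in range(h) for x in range(w) if mag[y][x] >= high)
    for y, x in queue:
        edges[y][x] = True
    while queue:                     # grow edges through weak neighbors
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and not edges[ny][nx]
                        and mag[ny][nx] >= low):
                    edges[ny][nx] = True
                    queue.append((ny, nx))
    return edges

mag = [[90, 40, 0],    # 90 is strong; the adjacent 40 is weak but connected
       [ 0,  0, 0],
       [ 0,  0, 40]]   # this isolated 40 is discarded as noise
print(sum(map(sum, hysteresis(mag, 30, 80))))  # -> 2 edge pixels kept
```

This connectivity rule is what lets Canny keep faint continuations of real contours while rejecting isolated weak responses, which is why one parameter set worked across multiple images.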
On the other hand, the Hough transform is a less reliable method to extract straight lines from the image. The quantization of the parameter space leads to apparent inaccuracies. In addition, it is difficult to detect peaks reliably. In complicated cases, this method results in spurious lines which are hard to get rid of. Therefore, it is suitable for simple images and requires careful tuning.
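The quantization issue described above can be seen directly in a minimal Hough accumulator. The sketch below votes in (theta, rho) space for three invented points on the vertical line x = 5; the bin sizes are deliberately coarse, as in any practical implementation.

```python
# A minimal Hough-transform accumulator: each point votes for every (theta, rho)
# line through it; the peak bin recovers the shared line. Points are made up.
import math
from collections import Counter

def hough_peak(points, n_theta=180, rho_step=1.0):
    votes = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta   # quantized angle in [0, pi)
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(t, round(rho / rho_step))] += 1   # quantized distance
    return votes.most_common(1)[0]

# Points on the vertical line x = 5: the peak sits at theta = 0, rho = 5,
# and collects one vote per point.
(t, rho_bin), count = hough_peak([(5, 0), (5, 30), (5, 70)])
print(t, rho_bin, count)  # -> 0 5 3
```

The accumulator only ever recovers the bin centers, so a line's true angle and offset are known no more precisely than `pi / n_theta` and `rho_step`, which is exactly the inaccuracy, and the peak-detection fragility, noted above.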
Bears In Canada: Database System
The Database Design Specialist, Inc., with me as the data architect (DA) for this project, designed and implemented a prototype database for the Bears in Canada (BIC) project. The database keeps a record of ongoing research studies in which samples of several classifications (telemetry, hair snags, and scat) are located by specially trained canines (dogs) and by technicians.
The motivation for the project is declining populations of bears. The project is about tracking the location and health of bears in or near a national park in Canada. Tracking is done by three different methods: telemetry tracking (after a bear has been captured and a tracking device has been attached), hair snags (found on bushes and trees in the area), and scats (animal feces) located by specially trained dogs. DNA data from hair and scat samples can be used to determine the sex of the bear and some health data. The scat samples provide individualized data on the physiological health of that animal.
Furthermore, the report presents the logical and physical data models, which give a clearer view of these entities, their attributes, and the relationships between them. Refer to the table of contents for the main sections of the report. For any suggested improvement, don't hesitate to contact us; we work to keep you satisfied.
- UT Official Site: https://www.ut.edu.sa/en/web/u12562
- IUPUI Official Site: https://science.iupui.edu/people/alhakamy-a%27aeshah