We have developed the world's first bionic logic-based vision system, combining feature understanding and logical reasoning to enable AI to not only recognize images but also comprehend scenes and make robust, human-like decisions.
Traditional computer vision requires manual feature engineering and careful classifier selection for each specific task.
System performance heavily depends on the expertise of engineers and often demands significant time and labor for tuning and validation.
CNNs learn patterns from massive datasets, just like a baby learns from experience.
However, their performance is highly dependent on dataset coverage, and any missing conditions require costly data recollection and retraining.
Inspired by human vision, our system combines feature-based perception with logical reasoning to selectively ignore irrelevant objects (such as noise or obstacles).
This approach greatly reduces dependency on massive datasets while enhancing robustness, explainability, and development efficiency.
Key Advantages
Overcomes the limitations of traditional vision and deep learning
Dramatically reduces dependency on massive datasets
Highly explainable, aligned with human reasoning and future regulatory standards
Rapid generalization to unseen conditions
Modern deep learning architectures, such as Convolutional Neural Networks (CNNs), are fundamentally built upon Perceptron Operators.
Each Perceptron node operates by:
Calculating a weighted sum of inputs
Adding a bias
Applying an activation function
The core computation model can be described as:
Output = Activation(Σ(Weight × Input) + Bias)
Essentially, CNNs heavily rely on multiplication and addition operations.
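The node computation above can be sketched in a few lines of Python. This is a minimal illustration; the names `step` and `perceptron` are ours, not part of any framework:

```python
def step(x):
    """Heaviside step activation: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def perceptron(inputs, weights, bias):
    """Weighted sum of the inputs, plus a bias, passed through the activation."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return step(weighted_sum + bias)

# Hand-tuned parameters that make this single node behave like an AND gate:
# both inputs must be 1 for the weighted sum (2.0) to clear the bias (-1.5).
AND_WEIGHTS, AND_BIAS = [1.0, 1.0], -1.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, perceptron([a, b], AND_WEIGHTS, AND_BIAS))
```

Note that even this trivial gate only works because the weights and bias were chosen correctly; in a trained network those values must be discovered by iterative optimization.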
The diagram demonstrates how basic logic operations like AND, OR, and NOR must be manually constructed inside a Perceptron by carefully tuning weights and biases.
Every logical operation, no matter how simple, is realized through a combination of multiplications and additions.
Using Perceptrons to approximate logic operations inherently requires:
Multiple rounds of weight adjustment
Careful tuning of biases
Iterative training and correction
Even for very simple logic like AND/NOR, training is needed to find the correct parameters.
This approach is fundamentally:
Computation-heavy
Resource-wasteful
Sensitive to initial conditions and data distributions
Implementing even basic logic operations via Perceptrons involves heavy training, resource consumption, and instability depending on initialization and data.
Direct Application of Logic Operators
In contrast, the logic operator-based method:
Directly applies logic rules (AND, OR, NOT, NOR, etc.)
Requires no multiplication, no training, no floating-point operations
Performs reasoning instantly, with exact and stable results
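For contrast, the direct logic operators described above need no parameters at all. A minimal sketch over {0, 1} inputs:

```python
# Direct logic operators over {0, 1} inputs: no weights, no biases,
# no training, and no floating-point arithmetic.

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

def NOR(a, b):
    return 1 - (a | b)
```

Each rule is exact for every input, with nothing to tune and nothing that can drift.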
Example: XNOR Logic - Demonstrating the True Power of Direct Reasoning
In this example, we illustrate the XNOR logic operation, whose output is 1 if both inputs are the same and 0 otherwise:
Method 1: Implementing XNOR with a Perceptron architecture:
It requires a combination of multiple operations (AND + NOR + OR).
Each sub-operation must be implemented with specifically tuned weights and biases.
The overall computational cost and structural complexity increase significantly.
Implementing XNOR with Perceptrons demands multiple layers of sub-operations (AND, NOR, OR), each involving weighted sums and bias adjustments, greatly increasing computational load and system complexity.
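A sketch of that Perceptron construction follows. The weights and biases here are chosen by hand for illustration; in practice they would have to be found by training:

```python
def step(x):
    """Heaviside step activation: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def node(inputs, weights, bias):
    """One Perceptron node: weighted sum, plus bias, through the activation."""
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def perceptron_xnor(a, b):
    # Three hand-tuned sub-operations, composed as XNOR = OR(AND, NOR).
    h_and = node([a, b], [1.0, 1.0], -1.5)         # AND gate
    h_nor = node([a, b], [-1.0, -1.0], 0.5)        # NOR gate
    return node([h_and, h_nor], [1.0, 1.0], -0.5)  # OR gate
```

Three separate weighted-sum nodes, each with its own carefully chosen parameters, are needed to reproduce one elementary logic function.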
Method 2: Using Direct Logic Operators.
A single XNOR gate directly achieves the desired logic.
No multiplication, no addition, and no parameter tuning are needed.
Immediate and stable results are guaranteed.
Using a direct XNOR logic gate, the logic operation is achieved in one simple step without any weight adjustment or training, resulting in instant and stable output.
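The same operation as a direct logic operator is a single step:

```python
def xnor(a, b):
    # One logical comparison: output is 1 exactly when the inputs match.
    return 1 if a == b else 0
```

No multiplication, no addition, no parameters: the rule itself is the computation.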
To effectively handle real-world visual tasks, our system does not abandon CNNs, but instead combines:
The spatial feature extraction capabilities of CNN, and
The highly efficient reasoning power of Logic Operators.
This hybrid architecture enables:
Faster processing speed
Lower computational resource demands
Higher reasoning stability
Ideal performance for Edge AI and low-power environments, without the need for high-end GPUs
Our system combines convolutional operators for feature extraction with logic operators for decision-making, achieving high efficiency, accuracy, and deployability even without expensive hardware.
In backlight conditions, strong vertical light beams often appear in camera images. Traditional computer vision methods (e.g., Mobileye) using edge detection may mistake these beams for strong edges, resulting in failures in lane and vehicle detection.
Our bionic logic-based strategy addresses this in two key steps:
Edges are first classified by orientation and strength, color-coded as follows:
Dark gray: non-edge
Light gray: weak edge
Red: horizontal edge
Blue: diagonal edge
Green: vertical edge
The vanishing point typically lies near the upper center of the image.
Road lanes should always appear below the vanishing point (since they are on the ground plane).
Therefore, any strong vertical edges above the vanishing point — such as vertical light beams or trees — can be logically excluded as non-road elements.
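The exclusion rule above can be sketched as follows. The edge representation and names are illustrative assumptions, not our actual data structures; note that image row indices grow downward, so "above the vanishing point" means a smaller row index:

```python
# Each edge is (row, col, kind), where kind is one of
# "non", "weak", "horizontal", "diagonal", "vertical".

def filter_road_edges(edges, vanishing_row):
    """Keep only edges that can plausibly belong to the road plane.

    Vertical edges above the vanishing point (light beams, trees, poles)
    cannot be lane markings, so they are logically excluded.
    """
    kept = []
    for row, col, kind in edges:
        if kind == "vertical" and row < vanishing_row:
            continue  # above the vanishing point -> not a road element
        kept.append((row, col, kind))
    return kept

edges = [
    (40, 120, "vertical"),    # backlight beam above the vanishing point
    (180, 90, "vertical"),    # lane-like edge below the vanishing point
    (60, 200, "horizontal"),  # horizontal edge, kept either way
]
print(filter_road_edges(edges, vanishing_row=100))
```

A single logical test per edge replaces any retraining that a purely learned system would need to cope with such beams.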
As a result, the system can:
Successfully ignore disturbances caused by backlight
Consistently detect correct lanes and vehicles
Improve system robustness and reliability under challenging lighting conditions
In ADAS applications, red, green, and yellow colors are critical cues representing traffic lights, brake lights, and road markings.
However, in low-resolution images (e.g., 320×240), even the human eye struggles to accurately identify these color features.
Using high-resolution images (e.g., 1280×720) improves detection but dramatically increases computational cost and latency.
Using our Logic-Based Feature Processing, we can robustly extract meaningful color features even from low-resolution images. The approach allows us to:
Apply logical inference on low-resolution images to deduce meaningful red, yellow, and green regions.
Robustly adapt to different color shifting conditions (nighttime, backlight, weather changes).
Minimize dependency on high-resolution processing, significantly reducing computational resources.
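As an illustration, such color deduction can be written as per-pixel channel logic. The thresholds and ratios below are illustrative assumptions, not our production rules:

```python
# Logic-based color classification on raw RGB values (0-255).
# Colors are deduced from relations BETWEEN channels rather than from
# fixed color values, which tolerates brightness and color shifts.

def is_red(r, g, b):
    # Red channel dominant and clearly stronger than the other two.
    return r > 100 and r > 2 * g and r > 2 * b

def is_green(r, g, b):
    return g > 100 and g > 2 * r and g > 2 * b

def is_yellow(r, g, b):
    # Red and green both strong, blue clearly suppressed.
    return r > 100 and g > 100 and 2 * b < min(r, g)

# A dim red tail light at night still satisfies the relational red rule:
print(is_red(120, 40, 35))
```

Because the rules compare channels against each other instead of matching exact colors, they keep working at 320×240 and under nighttime or backlight color shifts.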
With this approach, the system can:
Successfully detect key color features (e.g., red tail lights, traffic lights) from 320×240 images
Maintain robust recognition under severe color shift conditions (nighttime, rainy days).
Enhance overall system performance and real-time capability, ideal for embedded and low-power platforms.
To demonstrate the feasibility and advantages of our technology, the following real-world examples illustrate how our Bionic Logic-Based Strategy enables robust and efficient extraction of critical feature information under challenging conditions.
These cases cover common real-world difficulties such as extreme lighting changes, low resolution, color shifts, and backlight disturbances, showcasing the strong adaptability and practical deployability of our system.
This demonstrates the real-world robustness of our logic-based strategy.