Showing posts with label Medical AI. Show all posts

Thursday, September 4, 2025


AI Tool SeeMe Detects Hidden Consciousness in Coma Patients Days Before Doctors

Study framework and analytical pipeline for SeeMe. Credit: Communications Medicine (2025). DOI: 10.1038/s43856-025-01042-y

A Groundbreaking Advance in Brain Injury Diagnosis

A team of scientists at Stony Brook University has unveiled an artificial intelligence (AI) tool named SeeMe, capable of detecting subtle signs of consciousness in comatose brain injury patients days before traditional clinical methods can. The research, published in Communications Medicine, demonstrates how SeeMe leverages computer vision to track facial micro-movements that are invisible to the human eye, providing clinicians with an earlier and more accurate picture of patient responsiveness.

This finding has far-reaching implications for critical care, rehabilitation strategies, and ethical decision-making in the intensive care unit.

The Challenge of Detecting Consciousness in Coma Patients

When patients suffer acute brain injuries, doctors and families are often faced with agonizing uncertainty. Standard diagnostic methods, such as the Glasgow Coma Scale (GCS) or the Coma Recovery Scale-Revised (CRS-R), rely on visible signs of responsiveness like eye-opening or limb movement. However, these responses may appear days or even weeks after the brain begins to regain awareness.

While advanced imaging technologies such as EEG (electroencephalography) and fMRI (functional magnetic resonance imaging) can reveal hidden consciousness, they are expensive, not universally available, and impractical for repeated bedside monitoring. As a result, subtle or covert signs of consciousness often go unnoticed in the early stages of recovery.

This gap inspired researchers to design a low-cost, portable AI-based tool that could pick up on signs invisible to even experienced neurologists.

How SeeMe Works: Computer Vision at the Bedside

The SeeMe system uses high-resolution facial mapping, tracking pore-level movements at a scale of just ~0.2 millimeters. Patients are given simple verbal instructions such as:

  • "Open your eyes."
  • "Smile."
  • "Stick out your tongue."

SeeMe identifies subtle, stimulus-driven movements ahead of blinded rater detection in ABI coma patients. Credit: Communications Medicine (2025). DOI: 10.1038/s43856-025-01042-y

The AI then analyses subtle movement vectors--tiny changes in muscle activity--that may indicate a conscious response. Importantly, the tool was designed not to replace doctors but to complement clinical observations by adding a layer of measurable, objective analysis.
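
The paper does not publish SeeMe's implementation, but the core idea of quantifying stimulus-driven movement that is too small to see can be sketched with a simple motion-energy measure: compare frame-to-frame intensity change after a verbal cue against a pre-cue baseline. Everything here (frame sizes, thresholds, the synthetic "twitch") is a hypothetical illustration, not the study's pipeline.

```python
import numpy as np

def motion_energy(frames):
    """Mean absolute intensity change between consecutive frames.

    frames: array of shape (T, H, W), grayscale video.
    Returns an array of length T-1, one motion score per frame pair.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def cue_response(frames, cue_frame, z_thresh=3.0):
    """Flag a stimulus-driven movement: is post-cue motion energy
    significantly above the pre-cue baseline?"""
    energy = motion_energy(frames)
    baseline, post = energy[:cue_frame], energy[cue_frame:]
    mu, sigma = baseline.mean(), baseline.std() + 1e-9
    z = (post.max() - mu) / sigma
    return bool(z > z_thresh), z

# Synthetic demo: a still "face", then a tiny twitch after the cue.
rng = np.random.default_rng(0)
video = rng.normal(100, 0.5, size=(60, 32, 32))
video[40:45, 10:20, 10:20] += 5.0   # sub-visible movement after frame 40
responded, z = cue_response(video, cue_frame=30)
print(responded)
```

A real system would add face registration and artefact rejection, but the baseline-versus-post-cue comparison is the part that makes a movement "stimulus-driven" rather than random.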

Study Design and Participants

  • 37 coma patients aged 18-85 with severe brain injuries (GCS ≤ 8) were recruited at Stony Brook University Hospital.
  • 16 healthy controls were also included for baseline comparisons.
  • Video sessions were recorded after sedation pauses deemed medically safe.
  • Responses were measured against standard scales: GCS eye score and CRS-R auditory score.

Two blinded raters evaluated each trial independently, applying strict screening rules to rule out artefacts. Non-parametric statistical methods, including the Kruskal-Wallis test and chi-square analyses, confirmed the robustness of the results.
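
The two tests named above are standard and available in SciPy; the numbers below are fabricated for illustration only (they are not the study's data), showing how a Kruskal-Wallis test compares movement scores across groups and how a chi-square test compares detection counts.

```python
from scipy.stats import kruskal, chi2_contingency

# Hypothetical movement scores for three groups (illustrative only).
patients_early = [0.10, 0.20, 0.15, 0.12, 0.18]
patients_late  = [0.50, 0.62, 0.55, 0.70, 0.58]
controls       = [0.80, 0.75, 0.90, 0.85, 0.88]

# Kruskal-Wallis: a non-parametric test that the groups share a distribution.
h_stat, p_kw = kruskal(patients_early, patients_late, controls)

# Chi-square on a 2x2 table of detected / not-detected counts
# (rows: SeeMe, clinicians), using the eye-opening counts from the article.
table = [[30, 6], [25, 11]]
chi2, p_chi, dof, _ = chi2_contingency(table)
print(p_kw < 0.05, dof)
```

Non-parametric tests like Kruskal-Wallis are a sensible choice here because small clinical samples rarely justify normality assumptions.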

Key Findings: Earlier and More Accurate Detection

The results revealed that SeeMe consistently detected signs of consciousness earlier than clinical examinations.

Eye-Opening Advantage

  • SeeMe: Detected eye-opening at an average of 9.1 (± 5.5) days post-injury.

  • Clinicians: Detected at an average of 13.2 (± 11.4) days post-injury.

  • Result: A 4.1-day lead time, which can make a critical difference in decision-making.

SeeMe detected eye-opening in 85.7% of patients (30 out of 36) compared to 71.4% with clinical exams (25 out of 36).

Mouth Movements and Tongue Responses

  • For patients without obstructing endotracheal tubes, SeeMe detected mouth movements in 16 of 17 cases (94.1%).

  • In seven patients where both SeeMe and clinical testing were possible, SeeMe identified consistent mouth responses 8.3 days earlier on average.

Neural Network Analysis

The team also trained a deep neural network on SeeMe-positive cases:

  • 81% accuracy in detecting command-specific eye-opening.
  • 65% overall accuracy across tasks.
  • Lower performance on tongue protrusion (37%) and smiling (47%), indicating that eye-opening is the strongest and most reliable indicator of consciousness.
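
Per-task figures like these are simply per-command classification accuracies pooled into an overall score. A minimal sketch with made-up trial outcomes (the lists below are fabricated, not the study's data):

```python
import numpy as np

# 1 = the network classified the trial correctly, 0 = it did not.
results = {
    "eye_opening":       [1, 1, 1, 1, 0, 1, 1, 1, 0, 1],
    "smile":             [1, 0, 0, 1, 1, 0, 1, 0, 0, 1],
    "tongue_protrusion": [0, 1, 0, 0, 1, 0, 1, 0, 0, 0],
}

# Accuracy per command, plus the overall accuracy across all trials.
per_task = {task: float(np.mean(v)) for task, v in results.items()}
overall = float(np.mean(np.concatenate(list(results.values()))))
print(per_task["eye_opening"], round(overall, 3))
```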

Why Early Detection Matters

Early recognition of covert consciousness could dramatically reshape treatment pathways and ethical decisions in intensive care.

Supporting Families in Critical Decisions

Families often face life-or-death choices about continuing life support. If tools like SeeMe confirm early signs of awareness, they may feel more confident in pursuing rehabilitation instead of withdrawing care.

Improving Rehabilitation Outcomes

Rehabilitation is most effective when started early. By spotting responsiveness days in advance, SeeMe can help clinicians initiate therapies sooner, maximizing the patient's chances of recovery.

Complementing Clinical Practice

The researchers stress that SeeMe is not a replacement for traditional exams. Instead, it serves as a quantitative and objective partner, reducing the risk of overlooking subtle but meaningful responses.

Expert Insights

Lead investigators highlight the transformative potential of SeeMe.

According to the study authors:

"Patients with acute brain injury may display low-amplitude, stimulus-driven facial movements before visible responses appear. SeeMe demonstrates that some covertly conscious individuals show motor activity overlooked by clinicians."

This reinforces the importance of not underestimating early signals of life that traditional methods may miss.

The Future of AI in Neurology

The SeeMe project marks an early step in the integration of AI-powered tools into bedside neurological care. Future developments could extend its application into:

  • Continuous ICU monitoring systems that alert doctors when hidden responses appear.
  • AI-assisted rehabilitation programs, tailoring therapy to subtle improvements.
  • Communication interfaces for patients who are conscious but unable to move or speak.

By bridging the gap between human observation and machine precision, AI could transform how we understand consciousness itself.

Giving a Voice to the Silent

The discovery of SeeMe's effectiveness provides new hope for patients, families, and clinicians navigating the uncertainty of coma care. By detecting responses invisible to the naked eye, this AI tool not only shortens the timeline for recognizing consciousness but also opens doors to earlier intervention, improved outcomes, and more compassionate decision-making.

As AI technologies continue to merge with medicine, SeeMe highlights the powerful role of innovation in answering one of healthcare's most difficult questions: is anyone still there?

Source

"Discover how AI is reshaping brain injury care--follow us for breakthroughs at the intersection of medicine, technology and human resilience."

  • Human Health Issues - "Find trusted insights on neurological health, recovery and medical innovations at Human Health Issues."
  • FSNews365 - "Explore the latest breakthroughs in AI, science and technology shaping the future of human progress at FSNews365."
  • Earth Day Harsh Reality - "Learn how technology, environment, and climate realities influence global health and human survival at Earth Day Harsh Reality."

Sunday, June 29, 2025


AI Tool Detects 9 Dementia Types from a Single Brain Scan: Mayo Clinic Breakthrough

Dr David Jones analyses brain imaging on-screen at the Mayo Clinic. Credit: Mayo Clinic

Breakthrough AI Tool from Mayo Clinic Enables Early and Accurate Diagnosis

Study Findings and Diagnostic Performance

Researchers at the Mayo Clinic have unveiled an AI-driven tool that enables clinicians to detect brain activity signatures associated with nine forms of dementia, including Alzheimer's, via a single, commonly used scan, marking a significant leap forward in early and precise diagnosis.

According to a study published on 27 June 2025 in Neurology, StateViewer correctly identified dementia types in 88% of cases. The AI tool also expedited scan interpretation, achieving nearly double the speed and up to triple the accuracy of traditional diagnostic approaches. The system was trained and evaluated using more than 3,600 brain images, including those from healthy participants.

Addressing Diagnostic Challenges in Dementia Care

This breakthrough tackles a fundamental hurdle in dementia care—achieving early and accurate diagnosis, even in cases involving overlapping conditions. With new therapies on the horizon, prompt identification ensures patients receive the most suitable treatment when it proves most effective. The tool offers advanced diagnostic capabilities to practices without specialist neurology support.

The Growing Burden of Dementia

Global Impact and Current Diagnostic Limitations

Dementia currently impacts over 55 million individuals worldwide, with almost 10 million new diagnoses annually. Alzheimer's disease, the predominant subtype, now ranks as the fifth most common cause of death. Diagnosis involves memory assessments, blood tests, scans, clinical evaluations and specialist referrals—yet even experts find it difficult to differentiate between Alzheimer's, Lewy body and frontotemporal dementias.

StateViewer: A Vision Realized through AI Collaboration

Leadership Behind the Innovation

StateViewer was created under the leadership of Dr. David Jones, a consultant neurologist and head of the Neurology Artificial Intelligence Programme at the Mayo Clinic.

"Each patient who enters my clinic brings with them a story uniquely influenced by the brain's intricate workings," says Dr. Jones. "That complexity is what attracted me to neurology and still fuels my dedication to clearer diagnoses. StateViewer embodies that passion—progress toward earlier insight, more accurate treatment and ultimately, altering the course of these conditions."

AI Engineering and Patient-Focused Design

To realize that vision, Dr. Jones collaborated with Leland Barnard, Ph.D., the data scientist spearheading the AI engineering of StateViewer.

"Throughout the development of StateViewer, we remained acutely aware that each data point and brain scan represented a person grappling with a challenging diagnosis and pressing concerns," says Dr. Barnard. "Witnessing how this tool can offer clinicians timely, accurate insights underscores the promise of machine learning within clinical practice."

Translating Brain Activity into Clinical Understanding

How the StateViewer Tool Works

The tool interprets an FDG-PET scan—used to assess how the brain metabolizes glucose energy—and cross-references it with a comprehensive database of scans from individuals with confirmed dementia, detecting patterns linked to distinct or overlapping dementia types.

Dementia Subtype Detection

Alzheimer's generally impacts regions associated with memory and cognition, while Lewy body dementia affects areas governing attention and motor function. Frontotemporal dementia, meanwhile, disrupts language and behavioural centers.

Visual Insights for All Clinicians

StateViewer employs colour-coded brain maps to visually depict these patterns, enabling all clinicians—even those without neurology backgrounds—to comprehend the AI's diagnostic rationale.
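
Mayo has not published StateViewer's internals, but the idea of matching a scan against a database of confirmed-diagnosis patterns can be sketched as comparing a flattened uptake map to per-type "signature" templates. The template names, the cosine-similarity rule, and all data below are illustrative assumptions, not the actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean uptake "signatures" for three dementia types,
# each flattened to a 1,000-voxel vector (stand-ins for real templates).
templates = {
    "alzheimers":     rng.random(1000),
    "lewy_body":      rng.random(1000),
    "frontotemporal": rng.random(1000),
}

def cosine(a, b):
    """Cosine similarity between two flattened uptake maps."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(scan, templates):
    """Return the template most similar to the scan, plus all scores."""
    scores = {name: cosine(scan, t) for name, t in templates.items()}
    return max(scores, key=scores.get), scores

# A scan resembling the Alzheimer's signature plus measurement noise:
scan = templates["alzheimers"] + rng.normal(0, 0.1, 1000)
label, scores = classify(scan, templates)
print(label)
```

The per-type similarity scores are also what makes a colour-coded explanation possible: each region's contribution to the match can be painted back onto the brain map.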

Future Prospects and Clinical Integration

Researchers at the Mayo Clinic intend to broaden the tool's application and will continue assessing its effectiveness across diverse clinical environments.

Source

Discover how AI is revolutionizing dementia diagnosis with tools like StateViewer—advancing early detection and personalized care. Stay informed on health innovations, Space Science, Physics and environmental news by visiting our trusted sources below:

  • Exploring cutting-edge insights on health challenges and solutions at Human Health Issues, your go-to resource for medical breakthroughs and wellness guidance.
  • Keep up with the latest in science and technology at FSNews365, delivering timely updates on AI, Space and more.
  • Learn about pressing environmental topics and sustainability efforts at Earth Day Harsh Reality, raising awareness on climate change and ecological preservation.

Saturday, March 8, 2025


The Future of Molecular Imaging: Insect Vision-Inspired Deep Tissue Mapping

Introduction: A Breakthrough in Molecular Imaging

Chemical tomographic image processing, inspired by the 'ommatidium' mechanism in 'bee vision,' is used to predict the heterogeneity of organoids. a) Hybrid compound mosaic images (2D tomography) are created by merging sensor chemistries within distinct layers. These images are processed using deep neural networks to predict organoid pixel information. b) Generative deep learning models, based on encoder-decoder U-net architectures, are employed for organoid image synthesis (details in Figure S14 of the Supporting Information). c) The application of 20 sensor chemistries from a spatiotemporally-resolved spectrometer results in synthetic superpositions for layers 2 and 3, creating distinctive mosaic arrangements (i-iii for M1-M3) while distinguishing them from media backgrounds (iv-v). Credit: Advanced Materials (2025), DOI: 10.1002/adma.202413017.

A recent study published in Advanced Materials unveils a novel technique for tracking molecular processes deep within tissue. Developed at the Technion - Israel Institute of Technology, this breakthrough holds promise for advancing personalized medicine, cancer diagnostics, and early disease detection.

Study Leaders and Collaborators

The study was conducted under the leadership of Prof. Hossam Haick, alongside postdoctoral fellow Dr. Arnab Maity and Ph.D. candidate Vivian Darsa Maidantchik from the Technion's Wolfson Faculty of Chemical Engineering. Collaborators included Dr. Dalit Barkan, research assistant Dr. Keren Weidenfeld, and Prof. Sarit Larisch from the Faculty of Natural Sciences at the University of Haifa.

Technion's Method: Functional and Molecular Mapping of Organoids

Technion researchers have developed a method for functional and molecular mapping of organoids, 3D cellular models that mimic the structural and functional properties of natural tissues. These organoids are instrumental in biomedical research, enabling scientists to:

  • Investigate disease mechanisms
  • Evaluate therapeutic interventions

Challenges in Organoid Tracking

Despite their promise, organoids encounter significant technological challenges, particularly in tracking internal tissue processes. Current techniques are costly and have notable drawbacks:

  • RNA sequencing: Destroys the tissue
  • Confocal microscopy: Cannot probe deep-tissue activity

Technion's Cost-Effective & Non-Invasive Innovation

The Technion's innovation addresses these challenges with a cost-effective, precise, and non-invasive technique, enabling real-time monitoring of structural and molecular changes in organoids.

Visual of VOC spatiotemporal mapping and separation from organoids using a spatiotemporally-resolved spectrometer. b) VOC frequency spectrogram for analyzed organoids. c) 2D chemical tomography through sensor fusion. d) Schematic connecting VOCs with multi-dimensional imaging and cyto-proteo-genomics via generative AI. e) Scheme showing breast cancer progression through three stages: normal (MCF10A, M1), premalignant (MCF10AT, M2), and malignant (MCF10CA1h, M3), followed by f–h) DAPI staining, i–k) microscope imaging (x40, Bar = 50 µm), l) 3D representation, and m) western blot for mesenchymal markers (fibronectin, vimentin) and epithelial marker (E-Cadherin). n, o) Quantitative analysis of fibronectin and vimentin with statistical values. Credit: Advanced Materials (2025), DOI: 10.1002/adma.202413017.

Chemical Tomography: A New Method in Deep-Tissue Monitoring

The newly developed technique, known as chemical tomography, enables the functional analysis of tissues by detecting volatile organic compounds (VOCs) found in breath, saliva, sweat, and other bodily fluids.

VOC-Based Diagnostics in Disease Detection

Prof. Haick, a globally recognized authority in VOC-based diagnostics, has pioneered several breakthrough technologies for early disease detection.

Mapping the Molecular and Functional Landscape of Organoids

This study leveraged VOC monitoring to dynamically map the molecular and functional landscape of a human breast tissue organoid, uncovering critical protein and genomic markers linked to cancerous transformation.

AI-Driven Graphene Sensor System Inspired by Insect Vision

The system utilizes a graphene-based sensor array to detect VOCs, with the data being analyzed by generative artificial intelligence (AI).

Insect Vision as a Model for AI Processing

Drawing inspiration from insect compound eyes, where multiple small eyes provide various images to the brain:

  • Graphene sensors act as the 'eyes'
  • AI functions as the 'brain' for data processing and interpretation
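
The compound-eye analogy above can be sketched in a few lines: each of the 20 sensors contributes one reading, the readings are fused into a single feature vector (the "mosaic"), and a model assigns it to a disease stage. The nearest-centroid classifier, the stage centroids, and all data here are fabricated for illustration; the study's generative deep-learning models are far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(2)

# The three breast-tissue stages described in the article.
stages = ["M1_normal", "M2_premalignant", "M3_malignant"]

# Hypothetical mean fused response of the 20 graphene sensors per stage.
centroids = {s: rng.normal(loc=i, scale=0.2, size=20)
             for i, s in enumerate(stages)}

def classify(reading, centroids):
    """Assign a fused 20-sensor reading to the nearest stage centroid."""
    return min(centroids, key=lambda s: np.linalg.norm(reading - centroids[s]))

# A new organoid reading close to the premalignant profile:
reading = centroids["M2_premalignant"] + rng.normal(0, 0.05, 20)
print(classify(reading, centroids))
```

The fusion step is the point of the insect-vision analogy: no single sensor is diagnostic on its own, but the joint pattern across all 20 "eyes" is.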

Advantages of the New System

The innovative system enables real-time, dynamic mapping of organoids at a substantially reduced cost compared to conventional methods, while preserving tissue integrity. This breakthrough allows researchers to:

  • Monitor cancer progression across different stages
  • Enhance their understanding of cancer biology
  • Identify biochemical pathways, metabolic markers, and molecular processes involved in tumor development

Biochemical Pathways & Disease Biomarkers

Leveraging this novel approach, the researchers identified six key biochemical pathways responsible for generating 12 distinct VOCs, which have the potential to serve as biomarkers for various disease states.

Beyond Cancer: Future Applications in Personalized Medicine

Prof. Haick emphasized that beyond oncology, this system holds promise for diagnosing conditions affecting the kidneys, brain, and liver.

Real-Time Health Monitoring & AI Integration

Additionally, it can wirelessly transmit real-time physiological data to an external monitoring platform, facilitating continuous health tracking and early disease detection, marking a significant advancement in AI-driven personalized medicine.

Source

Revolutionizing Deep Tissue Imaging with AI!

This breakthrough in molecular imaging is transforming personalized medicine and cancer diagnostics. With a graphene-based sensor array and AI-driven data analysis, researchers can now achieve real-time, non-invasive tissue mapping.

  • Stay Ahead in Medical Innovation! Learn more about cutting-edge health advancements on Human Health Issues.
  • Discover More Scientific Breakthroughs! Get the latest updates on AI, Space, Physics and medical technology at FSNews365.
  • Explore the Future of Science & Sustainability! Read insightful articles on global health and environmental challenges at Earth Day Harsh Reality.

Stay informed, share your thoughts, and join the conversation in the comments below!
