RE: LeoThread 2024-10-13 12:37

in LeoFinance · 3 months ago

Here is the daily technology #threadcast for 10/13/24. We aim to educate people about this crucial area while also sharing information about what is taking place.

Drop all questions, comments, and articles relating to #technology and the future. The goal is to make it a technology center.

Technology improves the quality of life of hypertensive patients. Digital blood pressure monitors are becoming more accurate at an accelerating pace, supporting these patients' well-being with more reliable readings.

#technology #life

Healthcare is a huge opportunity for disruption from #technology

Disruptive technologies are of great interest at the forefront of our medical practice because they have revolutionized care by meeting needs and generating innovation.

#technology #life #medicine

However, it is we physicians who, with our decisions, personalize the care required for each patient. There will always be a human factor that is irreplaceable.

#technology #life #medicine

It is an industry that is lagging. Unfortunately, depending upon the country, the regulation can slow things down.

We will see how quickly things progress.

And whether we will keep up with the developments.

That is my goal. I keep posting these threadcasts daily to get the information in front of people and to allow others to share what they come across.

It also helps to feed the database for LeoAI.

What you do is wonderful, congratulations. Great contribution; it was a pleasure to take part in this discussion with you. My gratitude.

#technology #life #medicine

I advocate complementarity; everything that contributes to the cardinal purpose of restoring or preserving life at its highest quality is welcome.

#technology #life #medicine

Missing Link: Huawei Sanctions – The Backfire Effect

Huawei was one of the first Chinese companies to face U.S. sanctions. In response, the company launched an aggressive campaign of innovation and product development.

#technology #politics #sanctions #android #operatingsystem #economy #huawei

Huawei recently announced its new flagship smartphone, the Mate XT, priced at 20,000 Yuan (€2,500). With a triple-folding screen that can transform from a smartphone to a tablet, this release comes shortly after Apple’s iPhone 16 launch, signaling Huawei’s competitive stance.

The Mate XT has garnered over seven million pre-orders through Huawei’s Vmall. Despite U.S. sanctions, which aimed to cut Huawei off from western technology, the company has aggressively pushed innovation, replacing key components with Chinese alternatives and focusing on self-reliance in semiconductors. Huawei’s response to the sanctions has positioned it as a resilient competitor, successfully revitalizing its research, chip production, and its homegrown operating system, HarmonyOS. Despite setbacks in international markets, Huawei's domestic success and continued technological advancements underscore the backfire of U.S. attempts to limit the company's growth.

SpaceX catches giant Starship booster in fifth flight test

(Reuters) - SpaceX on Sunday launched its fifth Starship test flight from Texas and returned the rocket's towering first stage booster back to land for the first time, achieving a novel recovery method involving large metal arms.

#spacex #rocket #technology #newsonleo

The rocket's Super Heavy first stage booster lifted off at 7:25 a.m. CT (1225 GMT) from SpaceX's Boca Chica, Texas launch facilities, sending the second stage Starship rocket on a path in space bound for the Indian Ocean west of Australia, where it will attempt atmospheric reentry followed by a water landing.

The Super Heavy booster, after separating from the Starship upper stage at an altitude of some 74 km (46 miles), returned to the same area from which it was launched to make its landing attempt, aided by two robotic arms attached to the launch tower.

In latest move against WP Engine, WordPress takes control of ACF plugin

The dispute between WordPress founder Matt Mullenweg and hosting provider WP Engine continues, with Mullenweg announcing that WordPress is “forking” a plugin developed by WP Engine.

#wordpress #newsonleo #technology #socialmedia

WordPress Founder Matt Mullenweg Announces "Forking" of WP Engine Plugin, Escalating Dispute

The ongoing dispute between WordPress founder Matt Mullenweg and hosting provider WP Engine has reached a new level of intensity, with Mullenweg announcing that WordPress will be "forking" a plugin developed by WP Engine. This move is seen as a significant escalation in the conflict, which has been brewing for some time.

For those unfamiliar, the dispute centers on Advanced Custom Fields (ACF), a widely used plugin developed by WP Engine that adds custom content fields and related functionality to WordPress sites. However, Mullenweg and the WordPress community have raised concerns that the plugin is not compatible with the open-source nature of WordPress, and that it may create a closed ecosystem that undermines the platform's core values.

In a recent blog post, Mullenweg announced that WordPress will be "forking" the WP Engine plugin, effectively creating a new version of the plugin that is compatible with the open-source WordPress platform. This move is seen as a way for WordPress to take control of the plugin's development and ensure that it aligns with the platform's values and principles.

The decision to fork the plugin is not without controversy, however. Some have criticized the move, arguing that it will create confusion and fragmentation within the WordPress community. Others have expressed concerns that the forked plugin may not receive the same level of support and maintenance as the original plugin.

WP Engine has responded to the announcement, stating that they are "disappointed" by the decision to fork the plugin. The company has also emphasized that their goal is to provide a high-quality plugin that benefits the WordPress community, and that they will continue to develop and maintain the plugin regardless of the decision to fork.

The dispute between Mullenweg and WP Engine is not the first of its kind, and it highlights the ongoing tensions between the open-source WordPress community and commercial hosting providers. While WP Engine has been a major player in the WordPress ecosystem, the company's decision to develop a proprietary plugin has raised concerns about the potential for fragmentation and the erosion of the platform's open-source nature.

In the end, the decision to fork the plugin is a significant development in the ongoing dispute between Mullenweg and WP Engine. While it may create short-term challenges for the WordPress community, it also represents an opportunity for the platform to reassert its commitment to open-source principles and ensure that its users have access to high-quality, community-driven plugins.

As the situation continues to unfold, it will be interesting to see how the WordPress community responds to the forked plugin and whether it will ultimately benefit or hinder the platform's growth and development. One thing is certain, however: the dispute between Mullenweg and WP Engine is a reminder of the importance of maintaining the open-source nature of WordPress and ensuring that the platform remains a vibrant and inclusive community for developers and users alike.

The implications of the forked plugin are far-reaching, and it remains to be seen how the WordPress community will adapt to this new development. Some potential outcomes include:

  • Increased fragmentation within the WordPress community, as users may choose to use either the original or forked plugin, leading to confusion and potential compatibility issues.
  • A renewed focus on open-source principles within the WordPress community, as developers and users rally around the forked plugin and the values it represents.
  • A potential shift in the balance of power within the WordPress ecosystem, as the forked plugin may attract new users and developers who are drawn to the open-source nature of the platform.

Ultimately, the decision to fork the plugin is a significant development in the ongoing dispute between Mullenweg and WP Engine, and it will be interesting to see how the situation unfolds in the coming weeks and months.

SpaceX will attempt historic catch of returning Starship booster on Sunday

Starship is ready to fly again — and for the first time, SpaceX is going to try to bring the booster back to the launch site to catch it with a pair of oversized “chopsticks.”

#space #technology #newsonleo #spacex

SpaceX's Starship Set to Launch on Sunday: A Major Milestone in the Quest for Reusability

In a surprise move, SpaceX has announced that it will launch its massive Starship spacecraft on Sunday, earlier than expected. The Federal Aviation Administration (FAA) has given the green light for the test flight, which will mark the fifth in the Starship development program. The launch window opens at 5 AM PST (7 AM local time) from SpaceX's Starbase site in southeast Texas, where thousands of criteria must be met for the catch attempt to occur.

The Starship, standing at nearly 400 feet tall, is a crucial component of SpaceX's ambitious plan to make life multi-planetary and support NASA's Artemis mission to return humans to the moon. The spacecraft is designed for rapid reuse, with the goal of recovering both the upper stage (Starship) and the Super Heavy booster, and quickly refurbishing them for future flights. This reusability is a game-changer in the space industry, as it significantly reduces the cost of access to space.

It was successful 💪

The primary objectives for this fifth flight test are two-fold: attempting the first-ever "catch" of the Super Heavy booster at the launch site and achieving an on-target Starship reentry and splashdown in the Indian Ocean. The latter goal has already been achieved in the previous test mission in June, but the booster catch is a novel and challenging feat in the history of rocketry. The plan is for the booster to slow to a hover and position itself inside the zone of two "chopstick" arms attached to the launch tower. The arms will then close around the booster and hold it up after its engines stop firing.

In preparation for the launch, SpaceX engineers have been busy conducting numerous tests on the launch tower, replacing the rocket's thermal protection system, updating the ship's software for reentry, and testing the launch pad's water deluge system. The company's ultimate goal is to bring the Starship upper stage back to the landing site, which will be achieved in future test launches. With each flight building on the learnings from the last, SpaceX is on the verge of demonstrating techniques fundamental to Starship's fully and rapidly reusable design.

The live webcast of the test will start around 30 minutes before liftoff (7 AM PST) on SpaceX's website or on X. This is a major milestone in the Starship program, and fans of space exploration are eagerly anticipating the launch and the potential breakthroughs it may bring. The success of this mission will pave the way for future Starship flights, which will ultimately enable humanity to establish a permanent, self-sustaining presence on the moon and beyond.

The Starship program is a testament to SpaceX's innovative spirit and commitment to making humanity a multi-planetary species. With its reusable rockets and spacecraft, SpaceX is revolutionizing the space industry and pushing the boundaries of what is possible. The launch of Starship on Sunday is a major step forward in this journey, and it will be exciting to see the results of this test flight.

This three-person robotics startup is working with designer Yves Béhar to bring humanoids home

It’s hard to know where to focus when speaking to Christoph Kohstall. The contents of his packed Palo Alto garage compete for attention.

#technology #newsonleo #robots #humanoids

Kind Humanoid's Approach to Robotics

Kind Humanoid is a robotics startup that is focused on creating humanoid robots that are designed for the home market. The company's approach to robotics is unconventional, and they are prioritizing function over form. Unlike many humanoid manufacturers, Kind Humanoid is not focusing on creating robots that are designed for the industrial setting, but rather for the home.

This decision is driven by economics, as well as a desire to tackle an untapped market in aging in place technology. The company envisions their robots as home caretakers, capable of navigating diverse environments and providing assistance to older adults and care facilities.

Mona, the Humanoid Robot

Mona is the first humanoid robot created by Kind Humanoid. The robot features a soft white body with rounded edges, analogue hands, and feet that resemble hooves. A diamond-shaped head perches atop an impossibly skinny neck, while a small visor-like screen displays a cloudy blue sky, giving the robot a dreamlike quality reminiscent of Belgian painter René Magritte's surrealist works.

The design of Mona is a deliberate departure from the more traditional design aesthetic employed by companies like Tesla and Figure. Instead, Kind Humanoid is embracing a more playful and unconventional approach, one that prioritizes function over form.

Aging in Place Technology

Kind Humanoid's focus on the home market, combined with their commitment to creating robots that are both efficient and effective, has the potential to disrupt the industry. Aging in place technology is a largely untapped market for advanced robotics, and Kind Humanoid is well-positioned to capitalize on this opportunity.

The Kind Humanoid Team

Kind Humanoid is a three-person team, led by Christoph Kohstall. Kohstall has a Silicon Valley pedigree, having worked on robotics as part of the now-defunct Google Brain team. He is driven by a passion for creating robots that are both efficient and effective, and is committed to pushing the boundaries of what is possible in robotics.

The team at Kind Humanoid is small, but highly skilled and dedicated. They are working together to bring Mona to life, and are committed to making a meaningful impact in the robotics industry.

The Significance of Kind Humanoid

Kind Humanoid's approach to robotics is significant for several reasons. Firstly, the company is challenging the traditional design aesthetic employed by humanoid manufacturers, and is prioritizing function over form.

Secondly, Kind Humanoid is focusing on the home market, which is a largely untapped opportunity for advanced robotics. By creating robots that are designed for the home, Kind Humanoid is positioning itself for success in a market that is ripe for disruption.

Finally, Kind Humanoid's commitment to creating robots that are both efficient and effective is a key differentiator. The company is not simply creating robots for the sake of creating robots, but rather is focused on making a meaningful impact in the world.

Key Features of Kind Humanoid's Robots

  • Soft white body with rounded edges
  • Analogue hands
  • Feet that resemble hooves
  • Diamond-shaped head
  • Small visor-like screen displays a cloudy blue sky
  • Designed for the home market
  • Focus on aging in place technology
  • Prioritizes function over form

How a medtech market opportunity is shaping up for wearable neurotech

When you think of brain stimulating medtech, startups building wearables as therapeutics probably aren't the first thing that springs to mind.

#technology #newsonleo #medtech #wearables

The Rise of Non-Invasive Neurotech: A New Frontier in Medical Technology

In recent years, the field of neurotechnology has been experiencing a quiet revolution. While much attention has been focused on invasive brain-computer interfaces, such as those being developed by Elon Musk's Neuralink, a new wave of non-invasive neurotech startups is emerging. These companies are developing wearable devices that aim to stimulate the brain from the outside, offering potential treatments for a wide range of mental health and metabolic conditions.

This approach is gaining traction among investors and medical professionals alike, thanks to its lower risk profile and potential for rapid development and deployment.

The Promise of Wearable Brain Stimulation

At the forefront of this movement is Neurovalens, a Belfast-based startup founded in 2013 by Dr. Jason McKeown. The company has been developing a portfolio of electrical neurostimulating wearables, targeting conditions such as chronic insomnia, generalized anxiety disorder (GAD), post-traumatic stress disorder (PTSD), Type II diabetes, and obesity. Their approach involves stimulating the vestibular nerve, located behind the ear, as a pathway to influence the brainstem – a critical control center for fundamental bodily processes.

Neurovalens' journey illustrates the vast potential of non-invasive neurotech. Starting with a prototype focused on weight loss, the company has since expanded its focus to encompass a range of mental health and metabolic disorders. This expansion demonstrates the versatility of their technology and the breadth of potential applications for non-invasive brain stimulation.

The Investment Landscape

The appeal of non-invasive neurotech to investors is multifaceted. Kerry Baldwin, co-founder of U.K.-based deep-tech investor IQ Capital, which has backed Neurovalens, describes the opportunity as "massive." The relatively low capital requirements for development, compared to invasive technologies or traditional pharmaceutical research, make it an attractive proposition for early-stage investors.

Neurovalens has raised a total of $30.4 million to date, with plans to close a Series B round by the end of the year, potentially adding another $40 million to their coffers. While these figures may seem modest compared to the hundreds of millions raised by companies like Neuralink, they reflect the cost-efficiency of developing non-invasive technologies.

The market potential for neurotech is another factor driving investor interest. Current projections suggest that the overall neurotech market – including both invasive and non-invasive technologies – could grow from its current value of around $13-14 billion to $40 billion by 2030. This growth potential, combined with the lower barriers to entry for non-invasive technologies, makes it an appealing sector for investment.
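
For context, the growth rate implied by those figures (a quick back-of-envelope with assumed inputs: roughly $13.5 billion today and about six years of growth to 2030) works out to around 20% per year:

```python
# Implied compound annual growth rate (CAGR) behind the market projection above.
start, end, years = 13.5e9, 40e9, 6   # assumed midpoint today, 2030 target, years remaining
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")    # roughly 20% per year
```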

Neurotech Market and Funding Data

Neurovalens Funding

  • Total raised to date: $30.4 million
  • Planned Series B: Aiming to raise an additional $40 million

Neurotech Market Projections

  • Current market value (invasive + non-invasive): $13-14 billion
  • Projected market value by 2030: $40 billion

Comparison to Invasive Neurotech

  • Neuralink (Elon Musk's brain implant startup): Raised at least $323 million since 2016

Investment Appeal of Non-Invasive Neurotech

  • Lower capital requirements compared to invasive technologies
  • More cost-efficient than traditional pharmaceutical research
  • Potential for rapid development and deployment

This financial data underscores the growing interest in non-invasive neurotech and its potential to disrupt the healthcare industry.

The Regulatory Landscape and Market Strategy

One of the key challenges facing neurotech startups is navigating the complex regulatory landscape. Neurovalens has chosen to focus on obtaining approval from the U.S. Food and Drug Administration (FDA), which is widely regarded as the gold standard for medical device approval. This strategy not only lends credibility to their products but also opens the door to the massive U.S. healthcare market.

To date, Neurovalens has secured FDA approval for two of its wearables: the Modius Sleep for treating chronic insomnia, and the Modius Stress for generalized anxiety disorder. These approvals mark a significant milestone for the company and pave the way for the launch of their first products in the U.S. market in the coming months.

The company is also running clinical trials for several other devices, including those targeting PTSD, Type II diabetes, and obesity. They hope to secure FDA approvals for these devices over the next two years, creating a pipeline of innovative treatments.

This approach of seeking FDA approval for specific, well-defined conditions sets Neurovalens apart from some other players in the neurotech space. By focusing on precise patient segments and tailoring their devices to treat specific disorders, they aim to maximize efficacy and minimize the variability in patient outcomes that can be a challenge for neurotech treatments.

The Market Opportunity

The potential market for non-invasive neurotech treatments is vast, encompassing a wide range of prevalent health conditions. Let's look at some key statistics that illustrate the scale of the opportunity:

  1. Depression: According to the CDC, 5% of U.S. adults aged 18 and above report regular feelings of depression. Data from the U.S. National Center for Health Statistics shows that 13.2% of U.S. adults used antidepressant medications over a 30-day period from 2015 to 2018, with this trend on the rise.

  2. Anxiety: The CDC reports that 12.5% of U.S. adults experience regular feelings of worry, nervousness, or anxiety.

  3. Sleep Disorders: Between 30% and 40% of U.S. adults report getting insufficient sleep. A recent survey commissioned by the American Academy of Sleep Medicine found that 12% of U.S. adults had been diagnosed with chronic insomnia.

  4. Diabetes: The CDC reports that more than 38 million Americans have diabetes – approximately 1 in 10 of the population. Between 90% and 95% of these cases are Type II diabetes, which is one of the conditions Neurovalens is targeting with its neurotech wearable.

  5. Obesity: According to the CDC, more than 2 in 5 adult Americans are obese. Obesity is a significant risk factor for developing Type II diabetes and other metabolic disorders.

  6. PTSD: While less prevalent than some of the other conditions, the National Center for PTSD suggests that about 6 in every 100 people will experience PTSD at some point in their lives.

These statistics highlight the significant unmet need in treating these conditions and the potential impact that effective neurotech treatments could have on public health.

This market data illustrates the significant potential for neurotech treatments across a range of common health conditions.

Challenges and Future Directions

Despite the promising outlook, the path to commercializing non-invasive neurotech treatments is not without its challenges. Some of the key hurdles identified in the article include:

  1. Translating theoretical work into clinically validated outcomes: This involves convincing investors to take a bet on novel treatments and persuading doctors to involve their patients in clinical trials.
  2. Regulatory approval: Demonstrating to regulatory bodies like the FDA that the treatments are both effective and have an appropriate safety and risk profile.

  3. Reimbursement: Convincing healthcare payers that the treatments represent value for money, which is crucial for achieving scale in the healthcare market.

  4. Patient education: Overcoming the "whacky optics" of brain-zapping headbands and helping patients see these devices as viable treatments alongside more established options like therapy and medication.

  5. Variable efficacy: As with many medical treatments, outcomes can vary between patients. This presents a particular challenge for tech-based treatments, as consumers are accustomed to devices that "just work" for everyone.

Different companies in the neurotech space are taking varied approaches to these challenges. Flow Neuroscience, a Swedish medtech company, has opted for a more consumer-oriented approach with their depression-treating wearable. They acknowledge that their transcranial direct current stimulation (tDCS) technology may not work for everyone, but they aim to make it cheap and available to a wide audience.

Neurovalens, on the other hand, has taken a more targeted approach, developing specific devices for well-defined conditions. This strategy allows them to optimize the neurostimulation dosage for each patient segment, potentially improving efficacy rates.

The Future of Non-Invasive Neurotech

Looking ahead, the non-invasive neurotech sector faces both exciting opportunities and significant challenges. Some key areas to watch include:

  1. Personalization: As the field advances, there may be opportunities to better personalize treatments to individual patients, potentially improving efficacy rates.
  2. Expanding applications: Companies like Neurovalens are continually exploring new potential applications for their technology. As our understanding of the brain and its role in various health conditions improves, we may see neurotech treatments developed for an even wider range of disorders.

  3. Integration with other therapies: There's potential for non-invasive neurotech to be used in conjunction with other treatments, such as therapy or medication, potentially enhancing overall treatment efficacy.

  4. Improved user experience: As the technology matures, we may see improvements in the design and usability of these devices, making them more appealing and less obtrusive for patients to use.

  5. Long-term efficacy data: As these treatments become more widely used, it will be crucial to gather long-term data on their efficacy and safety, which could further boost their credibility and adoption.

  6. Regulatory evolution: As non-invasive neurotech becomes more established, we may see regulatory bodies develop more specific guidelines and approval pathways for these types of devices.

Conclusion

The emergence of non-invasive neurotech represents a significant shift in the landscape of medical technology. By offering the potential for targeted brain stimulation without the risks associated with invasive procedures, these technologies could revolutionize the treatment of a wide range of mental health and metabolic conditions.

Companies like Neurovalens are at the forefront of this revolution, developing a portfolio of treatments that could offer new hope to millions of patients worldwide. Their approach, focusing on specific conditions and seeking robust regulatory approval, sets a high bar for the industry and could help establish non-invasive neurotech as a credible and effective treatment modality.

However, the path to widespread adoption is not without its challenges. From regulatory hurdles to issues of variable efficacy and patient education, the non-invasive neurotech sector will need to navigate a complex landscape as it moves towards commercialization.

Despite these challenges, the potential benefits of this technology are immense. If successful, non-invasive neurotech could offer more accessible, cost-effective, and potentially more targeted treatments for conditions that affect millions of people worldwide. It could also open up new avenues for understanding and interacting with the human brain, potentially leading to breakthroughs in neuroscience and related fields.

As we look to the future, it's clear that non-invasive neurotech will be an area to watch closely. With continued investment, research, and development, these technologies could play a crucial role in shaping the future of healthcare, offering new hope and improved quality of life for patients around the world.

The journey of companies like Neurovalens also highlights the importance of patience and persistence in deep tech development. As Dr. McKeown noted, they have been doing R&D for "a long, long time," but their careful approach to setting and achieving milestones has helped maintain investor confidence over the years.

This long-term perspective is crucial in the development of medical technologies, where rigorous testing and regulatory approval processes are necessary to ensure patient safety and treatment efficacy. It's a reminder that while the tech world often celebrates rapid development and disruption, some of the most impactful innovations require years of careful research and development.

As the non-invasive neurotech sector continues to evolve, it will be fascinating to see how different companies balance the need for rapid innovation with the requirements of medical rigor and regulatory compliance. The success of pioneers like Neurovalens could pave the way for a new generation of medical devices that bridge the gap between consumer tech and traditional medical treatments.

In conclusion, the rise of non-invasive neurotech represents a promising frontier in medical technology. While challenges remain, the potential benefits – both in terms of improved patient outcomes and economic opportunities – make this an exciting field to watch in the coming years. As these technologies continue to develop and gain regulatory approval, they could usher in a new era of personalized, non-invasive treatments for a wide range of health conditions, potentially transforming the lives of millions of patients worldwide.

New GPU Technology Generates Dynamic 3D Worlds

A research team from Coburg University, in collaboration with AMD, has developed a GPU technology capable of generating complex virtual scenes in milliseconds.

#technology #3d #amd #gpu #computing #vr

A research team from Coburg University, in collaboration with AMD, has developed a groundbreaking GPU technology called GPU Work-Graphs, which generates complex 3D scenes in milliseconds using real-time procedural generation. This technology promises to set new standards in gaming, design, and virtual environments like the metaverse. Although still in development, the team has already showcased impressive results, such as generating detailed 3D scenes inspired by the Coburg marketplace.

Their work has gained international recognition, winning multiple awards, including the Best-Paper Award at High Performance Graphics 2024. The collaboration with AMD has also led to innovations in reducing memory usage for 3D models, achieving faster rendering speeds without sacrificing quality.
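
For readers unfamiliar with the term, procedural generation just means content is computed on the fly from compact rules plus a seed instead of being stored explicitly. The plain-Python sketch below is only a conceptual illustration of that idea; it is not related to the team's actual GPU work-graphs implementation, and all the names and numbers in it are invented.

```python
# Conceptual sketch: a "city block" generated from rules + a seed, not stored geometry.
import random

def generate_block(seed: int, grid: int = 4):
    rng = random.Random(seed)
    buildings = []
    for x in range(grid):
        for z in range(grid):
            buildings.append({
                "x": x * 10 + rng.uniform(-2, 2),
                "z": z * 10 + rng.uniform(-2, 2),
                "height": rng.choice([6, 9, 12, 18]),
                "style": rng.choice(["timber", "brick", "plaster"]),
            })
    return buildings

# The same seed always reproduces the same layout; a new seed yields a new one.
print(len(generate_block(seed=1)), generate_block(seed=1)[0])
```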

https://inleo.io/threads/view/bradleyarrow/re-leothreads-2hku84guy?referral=bradleyarrow

#spacex successfully catches the Super Heavy booster back at the launch pad with the tower's chopsticks.

The X-37B is set to perform a new type of "aerobraking" maneuver

This is according to the U.S. Space Force, and the disclosure of such details is notable, as the service typically keeps information about the unmanned spaceplane under wraps.
#technology #space #aerospace #drone

The unmanned X-37B spaceplane, currently on its seventh mission for the U.S. Space Force, is set to attempt a new "aerobraking" maneuver, as officially announced by the Space Force. This technique uses atmospheric drag to adjust the spacecraft's orbit with minimal fuel consumption. Launched in December, the Boeing-built X-37B has been in orbit conducting tests. The maneuver will also involve detaching a payload module in compliance with space debris regulations. Although the specific experiments remain classified, the mission marks a significant milestone in advancing orbital capabilities.
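
To see why aerobraking is fuel-efficient, here is a back-of-envelope sketch: a brief pass through very thin upper atmosphere produces a small drag deceleration, and repeated passes accumulate an orbit change that would otherwise require propellant. All numbers below are assumptions for illustration; they are not X-37B figures.

```python
# Velocity shed on a single atmospheric pass from the standard drag equation.
def drag_delta_v(rho, v, cd, area, mass, seconds):
    force = 0.5 * rho * v**2 * cd * area   # drag force in newtons
    return force / mass * seconds          # delta-v per pass, m/s

dv = drag_delta_v(rho=2e-9,   # kg/m^3: very thin upper atmosphere (assumed)
                  v=7800,     # m/s: typical low-orbit speed
                  cd=2.2,     # drag coefficient (assumed)
                  area=10,    # m^2 frontal area (assumed)
                  mass=5000,  # kg (assumed)
                  seconds=300)
print(f"~{dv:.2f} m/s shed per pass, with essentially no propellant spent")
```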

Fourteen U.S. states have filed lawsuits against TikTok, accusing the company of knowingly exploiting young users for profit while ignoring known risks to child safety. Leaked internal communications revealed that TikTok was aware of dangers associated with its app but failed to address them, launching manipulative features despite this knowledge.

These lawsuits are backed by a two-year investigation, exposing how TikTok's algorithms foster addiction after about 260 video views. Additionally, internal research showed the app's negative effects on mental health, including reduced cognitive function and increased anxiety. TikTok has faced legal scrutiny both in the U.S. and Europe for its child safety practices.

Apple might release a $2,000 Vision headset next year

Apple’s Vision Pro hasn’t exactly reshaped the market, but the company isn’t giving up on headsets that combine the digital and real worlds.

#apple #vision #technology #newsonleo

A new report from Bloomberg’s Mark Gurman says that Apple’s next big mixed reality release could come as early as next year, with the launch of a Vision headset costing around $2,000 — not exactly cheap, but more affordable than the $3,500 Vision Pro. To achieve this price, Apple would use cheaper materials and a less powerful processor, and it would not include the EyeSight feature that shows a user’s eyes outside the headset.

Next up would be a second-generation Vision Pro in 2026, and then potentially smart glasses (akin to Meta’s Ray-Bans) and AirPods with cameras in 2027.

The same report offers an update on Apple’s smart home strategy. The company hasn’t had much success here, either, but there are reportedly plans for “an affordable iPad-like screen” that could be placed around the house to watch TV, make FaceTime calls, and use apps. This would be followed by a tabletop device with a robot arm, which could cost around $1,000.

The promise and perils of synthetic data

Big tech companies — and startups — are increasingly using synthetic data to train their AI models. But there's risks to this strategy.

Is it possible for an AI to be trained just on data generated by another AI? It might sound like a harebrained idea. But it’s one that’s been around for quite some time — and as new, real data is increasingly hard to come by, it’s been gaining traction.

#newsonleo #data #synthetic #ai #technology

The Rise of Synthetic Data in AI Training: Promises and Pitfalls

Introduction

The field of artificial intelligence (AI) is experiencing a significant shift in how it acquires and utilizes data for training models. This summary explores the growing trend of using synthetic data in AI training, examining its potential benefits, challenges, and implications for the future of AI development. We'll delve into why companies like Anthropic, Meta, and OpenAI are turning to synthetic data, the underlying reasons for this shift, and the potential consequences of this approach.

The Current Landscape of AI Training Data

The Fundamental Need for Data in AI

At their core, AI systems are statistical machines that learn patterns from vast amounts of examples. These patterns enable them to make predictions and perform tasks across various domains. The quality and quantity of training data directly impact the performance and capabilities of AI models.

The Critical Role of Annotations

Annotations play a crucial role in AI training:

  1. Definition: Annotations are labels or descriptions attached to raw data, providing context and meaning.
  2. Purpose: They serve as guideposts, teaching models to distinguish between different concepts, objects, or ideas.
  3. Example: In image classification, photos labeled "kitchen" help a model learn to identify kitchen characteristics (e.g., presence of appliances, countertops).
  4. Importance of accuracy: Mislabeled data (e.g., labeling kitchen images as "cow") can lead to severely misguided models, highlighting the need for high-quality annotations (a toy sketch of this effect follows this list).
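
As a toy illustration of why labels steer the model (not from the article; the two "image features" and all numbers are invented), the sketch below trains a nearest-centroid classifier on annotated examples and then shows how mislabeling part of the data shifts its predictions:

```python
# Hypothetical 2-feature "room photos" (e.g., appliance count, counter area) with labels.
import numpy as np

rng = np.random.default_rng(0)
kitchens = [(rng.normal([5.0, 3.0], 0.5), "kitchen") for _ in range(50)]
bedrooms = [(rng.normal([1.0, 0.5], 0.5), "bedroom") for _ in range(50)]
data = kitchens + bedrooms

def centroid(examples, label):
    return np.mean([x for x, y in examples if y == label], axis=0)

def classify(x, centroids):
    # Nearest-centroid rule: the label whose centroid is closest to x wins.
    return min(centroids, key=lambda lab: np.linalg.norm(x - centroids[lab]))

clean = {lab: centroid(data, lab) for lab in ("kitchen", "bedroom")}
print(classify(np.array([3.2, 2.0]), clean))    # -> "kitchen"

# Mislabel half of the kitchen photos as "bedroom": the learned centroids shift,
# and the same example now comes back as "bedroom".
noisy_data = [(x, "bedroom" if y == "kitchen" and i % 2 == 0 else y)
              for i, (x, y) in enumerate(data)]
noisy = {lab: centroid(noisy_data, lab) for lab in ("kitchen", "bedroom")}
print(classify(np.array([3.2, 2.0]), noisy))    # -> "bedroom"
```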

The Annotation Industry

The growing demand for AI has led to a booming market for data annotation services:

  1. Market size: Estimated at $838.2 million currently, projected to reach $10.34 billion in the next decade (Dimension Market Research).
  2. Workforce: While exact numbers are unclear, millions of people worldwide are engaged in data labeling work.
  3. Job quality: Annotation jobs vary widely in terms of pay and working conditions:
    • Some roles, particularly those requiring specialized knowledge, can be well-compensated.
    • Many annotators, especially in developing countries, face low wages and lack of job security.

Challenges in Traditional Data Acquisition

Several factors are driving the search for alternatives to human-generated training data:

1. Human Limitations

  • Speed: There's a cap on how quickly humans can produce high-quality annotations.
  • Bias: Human annotators may introduce their own biases into the data.
  • Errors: Misinterpretation of labeling instructions or simple mistakes can compromise data quality.
  • Cost: Paying for human annotation at scale is expensive.

2. Data Scarcity and Access Issues

  • Increasing costs: Companies like Shutterstock are charging tens of millions for AI companies to access their archives.
  • Data restrictions: Many websites are now blocking AI web scrapers (e.g., over 35% of the top 1,000 websites block OpenAI's scraper).
  • Quality data scarcity: Around 25% of data from "high-quality" sources has been restricted from major AI training datasets.
  • Future projections: Some researchers (e.g., Epoch AI) predict that developers may run out of accessible training data between 2026 and 2032 if current trends continue.

3. Legal and Ethical Concerns

  • Copyright issues: Fear of lawsuits related to using copyrighted material in training data.
  • Objectionable content: Concerns about inappropriate or harmful content making its way into training datasets.

The Promise of Synthetic Data

Synthetic data emerges as a potential solution to many of the challenges faced by traditional data acquisition methods.

Definition and Concept

Synthetic data refers to artificially generated information that mimics the characteristics of real-world data. It's created using algorithms and AI models rather than being collected from real-world sources.
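
As a minimal, hedged sketch of the idea (purely illustrative: the "real" dataset is invented, and production generators are usually learned models such as LLMs, GANs, or diffusion models rather than a fitted Gaussian), synthetic rows can be sampled from a simple statistical model fitted to real records:

```python
# Fit a simple model to real records, then sample new rows that mimic their statistics.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" dataset: age and annual income for 1,000 people.
real = np.column_stack([
    rng.normal(40, 12, 1000),        # age
    rng.lognormal(10.5, 0.5, 1000),  # income
])

# "Train" the generator: empirical mean and covariance of (age, log-income).
features = np.column_stack([real[:, 0], np.log(real[:, 1])])
mean, cov = features.mean(axis=0), np.cov(features, rowvar=False)

def sample_synthetic(n):
    draw = rng.multivariate_normal(mean, cov, size=n)
    return np.column_stack([draw[:, 0], np.exp(draw[:, 1])])

synthetic = sample_synthetic(5000)                 # more rows than were ever collected
print(real[:, 1].mean(), synthetic[:, 1].mean())   # similar income statistics
```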

Perceived Benefits

  1. Scalability: Theoretically unlimited generation of training examples.
  2. Customization: Ability to create data for specific scenarios or edge cases.
  3. Privacy preservation: Can generate data without using sensitive real-world information.
  4. Cost-effectiveness: Potentially cheaper than acquiring and annotating real-world data.
  5. Bias reduction: Opportunity to create more balanced and diverse datasets.

Industry Adoption

Several major AI companies and research institutions are exploring or already using synthetic data:

  1. Anthropic: Used synthetic data in training Claude 3.5 Sonnet.
  2. Meta: Fine-tuned Llama 3.1 models with AI-generated data.
  3. OpenAI: Reportedly using synthetic data from its "o1" model for the upcoming Orion.
  4. Writer: Claims to have trained Palmyra X 004 almost entirely on synthetic data at a fraction of the cost of comparable models.
  5. Microsoft: Utilized synthetic data in training its Phi open models.
  6. Google: Incorporated synthetic data in the development of Gemma models.
  7. Nvidia: Unveiled a model family specifically designed to generate synthetic training data.
  8. Hugging Face: Released what it claims is the largest AI training dataset of synthetic text.

Market Projections

  • The synthetic data generation market could reach $2.34 billion by 2030.
  • Gartner predicts that 60% of data used for AI and analytics projects in 2024 will be synthetically generated.

Practical Applications

  1. Generating specialized formats: Synthetic data can create training data in formats not easily obtained through scraping or licensing.

    • Example: Meta used Llama 3 to generate initial captions for video footage, later refined by humans.
  2. Supplementing real-world data: Companies like Amazon generate synthetic data to enhance real-world datasets for specific applications (e.g., Alexa speech recognition).

  3. Rapid prototyping: Synthetic data allows quick expansion of datasets based on human intuition about desired model behaviors.

  4. Cost reduction: Writer claims to have developed a model comparable to OpenAI's at a fraction of the cost ($700,000 vs. estimated $4.6 million) using synthetic data.

Limitations and Risks of Synthetic Data

While synthetic data offers many potential benefits, it also comes with significant challenges and risks that must be carefully considered.

1. Propagation of Existing Biases

  • Garbage in, garbage out: Synthetic data generators are themselves AI models, trained on existing data. If this base data contains biases or limitations, these will be reflected in the synthetic outputs.
  • Representation issues: Underrepresented groups in the original data will likely remain underrepresented in synthetic data.
  • Example: A dataset with limited diversity (e.g., only 30 Black individuals, all middle-class) will produce synthetic data that reflects and potentially amplifies these limitations.

2. Quality Degradation Over Generations

  • Compounding errors: A 2023 study by researchers at Rice University and Stanford found that over-reliance on synthetic data can lead to models with decreasing quality or diversity over successive generations of training.
  • Sampling bias: Poor representation of the real world in synthetic data can cause a model's diversity to worsen after multiple generations of training.
  • Mitigation strategy: The study suggests that mixing in real-world data helps to counteract this degradation effect (a toy simulation of this dynamic follows this list).
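
To make that dynamic concrete, here is a deliberately toy simulation (my own illustration, not the study's setup): each "generation" refits a Gaussian to the previous generation's samples. With no anchor to real data, the estimated spread has nothing to hold it in place and can drift far from the true value; mixing real data back in keeps it anchored.

```python
# Toy model-on-model training loop: refit a Gaussian to its own samples each generation.
import numpy as np

rng = np.random.default_rng(7)
real = rng.normal(0.0, 1.0, 10_000)   # stand-in for real-world data (true std = 1.0)

def run(generations, n_samples, real_fraction):
    data = real
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()           # "train" on the current dataset
        synthetic = rng.normal(mu, sigma, n_samples)  # "generate" the next dataset
        n_real = int(real_fraction * n_samples)
        data = np.concatenate([synthetic[n_real:], rng.choice(real, n_real)])
    return data.std()

# Pure synthetic: the spread does an unanchored random walk across generations.
print(run(generations=50, n_samples=100, real_fraction=0.0))
# Half real data each round: the spread stays close to the true value of 1.0.
print(run(generations=50, n_samples=100, real_fraction=0.5))
```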

3. Hallucinations and Factual Accuracy

  • Complex model hallucinations: More advanced synthetic data generators (like OpenAI's rumored "o1") may produce harder-to-detect hallucinations or inaccuracies.
  • Traceability issues: It may become increasingly difficult to identify the source of errors or hallucinations in synthetically generated data.
  • Compounding effect: Models trained on synthetic data containing hallucinations may produce even more error-prone outputs, creating a problematic feedback loop.

4. Loss of Nuanced Knowledge

  • Generic outputs: Research published in Nature shows that models trained on error-ridden synthetic data tend to lose their grasp of more esoteric or specialized knowledge over generations.
  • Relevance degradation: These models may increasingly produce answers that are irrelevant to the questions they're asked.
  • Broader impact: This phenomenon isn't limited to text-based models; image generators and other AI systems are also susceptible to this type of degradation.

5. Model Collapse

  • Definition: A state where a model becomes less "creative" and more biased in its outputs, potentially compromising its functionality.
  • Causes: Overreliance on synthetic data without proper curation and mixing with fresh, real-world data.
  • Consequences: Models may become increasingly homogeneous and less capable of handling diverse or novel tasks.

6. Need for Human Oversight

  • Not a self-improving solution: Synthetic data pipelines require careful human inspection and iteration to ensure quality.
  • Resource intensive: The process of reviewing, curating, and filtering synthetic data can be time-consuming and potentially costly.
  • Expertise required: Effective use of synthetic data necessitates a deep understanding of both the data domain and the potential pitfalls of synthetic generation.

Best Practices for Using Synthetic Data

To mitigate the risks associated with synthetic data while harnessing its benefits, researchers and AI developers should consider the following best practices:

1. Thorough Review and Curation

  • Implement robust processes for examining generated data.
  • Iterate on the generation process to improve quality over time.
  • Develop and apply safeguards to identify and remove low-quality data points (a minimal filtering sketch follows this list).
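
As a minimal sketch of such a safeguard (illustrative only: the thresholds and checks are invented, and real pipelines usually layer model-based quality scoring on top of rules like these):

```python
# Rule-based curation for synthetic text samples: drop duplicates, extreme lengths,
# and degenerate, highly repetitive outputs.
def keep(sample: str, seen: set) -> bool:
    text = sample.strip()
    words = text.split()
    if len(words) < 5 or len(words) > 2000:    # trivially short or runaway-long outputs
        return False
    if text.lower() in seen:                   # exact-duplicate filter
        return False
    if len(set(words)) / len(words) < 0.3:     # heavy repetition -> likely degenerate
        return False
    seen.add(text.lower())
    return True

raw_synthetic = [
    "The Eiffel Tower is located in Paris, France.",
    "the the the the the the the the the the",
    "The Eiffel Tower is located in Paris, France.",
]
seen = set()
curated = [s for s in raw_synthetic if keep(s, seen)]
print(curated)   # only the first sample survives
```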

2. Hybrid Approaches

  • Combine synthetic data with fresh, real-world data to maintain diversity and accuracy.
  • Use synthetic data to augment rather than replace traditional datasets entirely.

3. Continuous Monitoring

  • Implement systems to track model performance and detect signs of quality degradation or collapse.
  • Regularly assess the diversity and relevance of model outputs when trained on synthetic data.

4. Transparency and Documentation

  • Maintain clear records of synthetic data generation processes and any known limitations.
  • Be transparent about the use of synthetic data in model training when deploying AI systems.

5. Ethical Considerations

  • Assess the potential impact of synthetic data on model fairness and bias.
  • Consider the broader societal implications of replacing human-annotated data with synthetic alternatives.

6. Interdisciplinary Collaboration

  • Engage experts from various fields (e.g., ethics, domain specialists, data scientists) in the development and application of synthetic data strategies.

The Future of Synthetic Data in AI

As the field of AI continues to evolve, the role of synthetic data is likely to grow in importance. However, its ultimate impact and limitations remain subjects of ongoing research and debate.

Potential Developments

  1. Improved Generation Techniques: Advances in AI may lead to more sophisticated synthetic data generation models, potentially addressing current limitations.

  2. Specialized Synthetic Data Tools: We may see the emergence of industry-specific or task-specific synthetic data generation tools optimized for particular domains.

  3. Regulatory Frameworks: As synthetic data becomes more prevalent, new regulations or guidelines may emerge to govern its use in AI training.

  4. Integration with Other Technologies: Synthetic data may be combined with other emerging AI techniques, such as few-shot learning or transfer learning, to create more robust and adaptable models.

Ongoing Challenges

  1. Verifiability: Developing methods to verify the quality and reliability of synthetic data remains a significant challenge.

  2. Ethical Considerations: The use of synthetic data raises complex ethical questions about representation, bias, and the potential displacement of human workers in the annotation industry.

  3. Long-term Effects: The full impact of training multiple generations of AI models on synthetic data is not yet fully understood and will require ongoing study.

  4. Balancing Act: Finding the right balance between synthetic and real-world data to optimize model performance while mitigating risks will be a continuing challenge for AI researchers and developers.

Conclusion

The rise of synthetic data in AI training represents both a promising solution to data scarcity and a complex challenge for the field. While it offers the potential to accelerate AI development, reduce costs, and address some ethical concerns related to data collection, it also introduces new risks and uncertainties.

The success of synthetic data in AI will likely depend on:

  1. Continued advancements in data generation techniques
  2. Rigorous validation and quality control processes
  3. Thoughtful integration with real-world data
  4. Ongoing research into the long-term effects of synthetic data on model performance and bias

As AI continues to play an increasingly central role in various aspects of society, the responsible and effective use of synthetic data will be crucial in shaping the capabilities, limitations, and ethical implications of future AI systems. Researchers, developers, policymakers, and ethicists must work together to navigate this complex landscape and ensure that the benefits of synthetic data are realized while minimizing potential harms.

The journey of synthetic data in AI is still in its early stages, and its full potential and limitations are yet to be fully understood. As we move forward, maintaining a balance between innovation and caution will be essential in harnessing the power of synthetic data to create more capable, fair, and robust AI systems that can benefit society as a whole.

Synthetic data generation has become a business in its own right — one that could be worth $2.34 billion by 2030. Gartner predicts that 60% of the data used for AI and analytics projects this year will be synthetically generated.

Data center tech is exploding but adoption won't be easy for startups

Startups are building tech to make data centers more efficient and sustainable but good tech isn't always enough for meaningful adoption.

#technology #datacenter #newsonleo

The Environmental Impact of Data Centers

Data centers are massive energy consumers, accounting for about 4% of the total power in the US today. As the demand for cloud computing and AI grows, so does the demand for data centers. This has led to a significant increase in energy consumption, with some estimates suggesting that data centers could account for up to 13% of global electricity by 2025.

The environmental impact of data centers is significant, with estimates suggesting that they:

  • Consume over 90 billion kilowatt-hours of electricity per year
  • Produce over 130 million metric tons of CO2 equivalent emissions per year
  • Account for over 400 megawatts of power in the US, equivalent to the power of over 500,000 homes

The Energy Consumption of Data Centers

Data centers are incredibly energy-hungry, with some estimates suggesting that they consume up to 10 times more energy than the average office building. This is because data centers require a lot of power to cool and power the servers that store and process data.
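
For a rough sense of scale (a back-of-envelope sketch with assumed figures, not numbers from the article), annual energy use can be estimated from a facility's IT load and its PUE, the standard ratio of total facility power to IT power; the gap between the two PUE values below is essentially the cooling and power-delivery overhead discussed next.

```python
# Annual energy from IT load and PUE (Power Usage Effectiveness = total power / IT power).
def annual_energy_gwh(it_load_mw: float, pue: float) -> float:
    hours_per_year = 8760
    return it_load_mw * pue * hours_per_year / 1000   # MW * h -> MWh, then GWh

# A hypothetical 20 MW facility at a mediocre vs. an efficient PUE.
for pue in (1.6, 1.2):
    print(f"PUE {pue}: {annual_energy_gwh(20, pue):,.0f} GWh/year")
```

At these assumed figures, improving PUE from 1.6 to 1.2 saves roughly 70 GWh per year at a single site, which is why cooling efficiency gets so much attention.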

The energy consumption of data centers is driven by several factors, including:

  • Cooling systems: Keeping servers at a stable temperature requires dedicated cooling, which accounts for a large share of a facility's energy use.
  • Lighting: High-intensity lighting for the rows of servers adds to the load.
  • Power supplies: Converting and delivering power to the servers introduces further overhead.

Sustainable Data Centers

To mitigate the environmental impact of data centers, companies are exploring various sustainable solutions. Some of these solutions include:

  • Renewable Energy: Many data centers are now powered by renewable energy sources, such as solar or wind power. This reduces the carbon footprint of data centers and helps to mitigate climate change.
  • Energy-Efficient Cooling Systems: Companies are developing more energy-efficient cooling systems that use advanced technologies, such as liquid cooling or air-side free cooling.
  • High-Performance Computing: High-performance computing (HPC) solutions can reduce the energy consumption of data centers by using more efficient servers and optimized software.
  • Data Center Consolidation: Data center consolidation involves reducing the number of data centers and consolidating them into larger, more efficient facilities. This can help to reduce energy consumption and costs.

Innovative Solutions

Several companies are developing innovative solutions to make data centers more sustainable. Some of these solutions include:

  • Incooling: Incooling is a Dutch company that develops innovative cooling systems for data centers. Their technology uses a unique combination of air and water to cool servers, reducing energy consumption by up to 70%.
  • Submer: Submer is a US-based company that develops liquid cooling systems for data centers. Their technology uses a liquid coolant to cool servers, reducing energy consumption by up to 90%.
  • Phaidra: Phaidra is a US-based company that develops software solutions for data centers. Their technology uses advanced algorithms to optimize cooling systems and reduce energy consumption.

Challenges and Opportunities

While there are many opportunities for innovation and growth in the data center space, there are also several challenges that need to be addressed. Some of these challenges include:

  • Scalability: Data centers need to be scaled up to meet growing demand, which can be challenging due to energy consumption and environmental concerns.
  • Cost: Building and maintaining data centers can be expensive, which can make it challenging for companies to adopt sustainable solutions.
  • Regulation: There is a growing need for regulation to ensure that data centers are built and operated in an environmentally sustainable way.

Overall, the data center space is complex and challenging, but also presents many opportunities for innovation and growth. By exploring new technologies and business models, companies can help to reduce the environmental impact of data centers and create a more sustainable future.

The growth of inference is going to far outpace that of GPUs for training models.

Meet the Chinese 'Typhoon' hackers preparing for war

Of the cybersecurity risks facing the United States today, few loom larger than the potential sabotage capabilities posed by China-backed hackers, which top U.S. officials have described as an “epoch-defining threat.”

#newsonleo #china #hacking #typhoon #war

Volt Typhoon: A Sophisticated Hacking Group

Volt Typhoon is a Chinese government-backed hacking group that has been identified as a significant threat to national security. According to Microsoft, Volt Typhoon has been targeting and compromising network equipment, such as routers, firewalls, and VPNs, since mid-2021 as part of an ongoing and concerted effort to infiltrate deeper into U.S. critical infrastructure.

The group's tactics, techniques, and procedures (TTPs) are sophisticated, and they have been able to evade detection by using zero-day exploits and other advanced techniques. Volt Typhoon has also been known to use social engineering tactics to gain access to networks and devices.

In January, the U.S. government disrupted a botnet used by Volt Typhoon to hide its malicious activity aimed at U.S. critical infrastructure. The disruption was successful in removing the malware from the hijacked routers, but it's likely that Volt Typhoon will continue to evolve and adapt to evade detection.

Flax Typhoon: A Cybersecurity Company with a Dark Secret

Flax Typhoon is a Chinese government-backed hacking group that has operated under the guise of a publicly traded cybersecurity company based in Beijing. The company, Integrity Technology Group, has publicly acknowledged its connections to China's government.

According to Microsoft, Flax Typhoon has been active since mid-2021, predominantly targeting "government agencies and education, critical manufacturing, and information technology organizations in Taiwan." The group has also been known to attack multiple U.S. and foreign corporations.

Flax Typhoon's TTPs are similar to those of Volt Typhoon, and they have also been using zero-day exploits and other advanced techniques to evade detection. In September, the U.S. government said it had taken control of another botnet, used by Flax Typhoon, which leveraged a custom variant of the infamous Mirai malware.

Salt Typhoon: A Sophisticated Group with Access to Wiretap Systems

Salt Typhoon is a Chinese government-backed hacking group that has been identified as one of the most sophisticated groups operating in the wild. In October, the group was believed to have compromised the wiretap systems of several U.S. telecom and Internet providers, including AT&T, Lumen (formerly CenturyLink), and Verizon.

According to reports, Salt Typhoon may have gained access to these organizations using compromised Cisco routers. The U.S. government is said to be in the early stages of its investigation, but the breach could be "potentially catastrophic" if it involved hacking into systems that house much of the U.S. government's requests, including the potential identities of Chinese targets of U.S. surveillance.

Salt Typhoon's TTPs are highly sophisticated, and they have been able to evade detection by using advanced techniques such as encryption and secure communication protocols. The group's access to wiretap systems gives them a significant advantage over other hacking groups, and it's likely that they will use this access to gather intelligence on U.S. targets.

The Threat from Chinese Government-Backed Hackers

The threat from Chinese government-backed hackers is a serious one, and it's likely that we will see more attacks in the future. The groups mentioned above are just a few examples of the many hacking groups operating in the wild, and they are all backed by the Chinese government.

The Chinese government's support for hacking groups is a significant concern, as it gives these groups the resources and expertise they need to operate effectively. The government's support also sends a message to other countries that it is willing to use cyber warfare as a tool of statecraft.

What Can Be Done to Counter the Threat

To counter the threat from Chinese government-backed hackers, the U.S. government must take a number of steps. These include:

  1. Improving Cybersecurity: The U.S. government must strengthen its cybersecurity posture by investing in new technologies and techniques. This includes developing more advanced threat detection and response systems (see the sketch after this list) and hardening the security of critical infrastructure.
  2. Disrupting Hacking Groups: The U.S. government must continue to disrupt groups such as Volt Typhoon, Flax Typhoon, and Salt Typhoon, including through botnet takedowns and AI-powered threat detection and response.
  3. Raising Awareness: The U.S. government must raise awareness about the threat from Chinese government-backed hackers, educating the public about the risks of hacking and the importance of cybersecurity.
  4. Cooperating with Other Countries: The U.S. government must cooperate with other countries to counter this threat, sharing intelligence and best practices and coordinating efforts to disrupt hacking groups.
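To make point 1 a bit more concrete, here is a minimal, purely illustrative sketch of baseline log monitoring in Python. It is not any agency's or vendor's actual tooling; the allow-list, quiet hours, and sample records are all assumptions made up for the example.

```python
# Minimal sketch of baseline anomaly detection for network-device logins.
# Purely illustrative: real critical-infrastructure monitoring uses far richer telemetry.
from datetime import datetime

# Hypothetical parsed authentication records: (timestamp, source_ip, account)
AUTH_EVENTS = [
    ("2024-10-01 02:14:09", "203.0.113.7", "admin"),
    ("2024-10-01 09:30:12", "198.51.100.4", "operator"),
    ("2024-10-03 03:01:40", "192.0.2.99", "admin"),
]

KNOWN_GOOD_IPS = {"198.51.100.4"}  # assumption: the operator maintains an allow-list
QUIET_HOURS = range(0, 5)          # assumption: logins between 00:00-04:59 are unusual

def flag_suspicious(events):
    """Return events that come from unknown source IPs or land in quiet hours."""
    flagged = []
    for ts, ip, account in events:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour
        if ip not in KNOWN_GOOD_IPS or hour in QUIET_HOURS:
            flagged.append((ts, ip, account))
    return flagged

if __name__ == "__main__":
    for ts, ip, account in flag_suspicious(AUTH_EVENTS):
        print(f"review: {account} login from {ip} at {ts}")
```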

Overall, the threat from Chinese government-backed hackers is a serious one, and it requires a comprehensive and coordinated response from the U.S. government and other countries.

From the Nvidia blog:

Improvements in Blackwell haven’t stopped the continued acceleration of Hopper. In the last year, Hopper performance has increased 3.4x in MLPerf on H100 thanks to regular software advancements. This means that NVIDIA’s peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
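As a back-of-the-envelope reading of those two figures (not a breakdown NVIDIA itself provides), the implied uplift of Blackwell over today's software-tuned Hopper works out to roughly 3x:

```python
# Rough arithmetic on the figures quoted above; NVIDIA does not break the claim down this way.
hopper_software_gain = 3.4    # MLPerf speedup on H100 over the past year, per the quote
blackwell_vs_year_ago = 10.0  # Blackwell today vs. Hopper one year ago, per the quote

implied_uplift = blackwell_vs_year_ago / hopper_software_gain
print(f"Implied Blackwell vs. today's Hopper: ~{implied_uplift:.1f}x")  # ~2.9x
```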

Meta’s Yann LeCun says worries about A.I.’s existential threat are ‘complete B.S.’

AI pioneer Yann LeCun doesn’t think artificial intelligence is actually on the verge of becoming intelligent.

LeCun — a professor at New York University, senior researcher at Meta, and winner of the prestigious A.M. Turing Award — has been open about his skepticism before, for example tweeting that before we worry about controlling super-intelligent AI, “we need to have the beginning of a hint of a design for a system smarter than a house cat.”

#ai #technology #yannlecun

He elaborated on his opinions in an interview with the Wall Street Journal, where he replied to a question about A.I. becoming smart enough to pose a threat to humanity by saying, “You’re going to have to pardon my French, but that’s complete B.S.”

LeCun argued that today’s large language models lack some key cat-level capabilities, like persistent memory, reasoning, planning, and an understanding of the physical world. In his view, LLMs merely demonstrate that “you can manipulate language and not be smart,” and they will never lead to true artificial general intelligence (AGI).

It’s not that he’s a complete AGI skeptic. However, he said new approaches will be needed. For example, he pointed to work around digesting real world video by his Fundamental AI Research team at Meta.

Here is an in-depth summary of the YouTube video "How Is AI This Good?" by Dylan Curious:

AI is rapidly advancing and is already being used in a variety of ways.

The video starts with a few examples of how AI is being used today. For instance, AI is being used to create realistic talking avatars, generate music, and even crush robots with liquid nitrogen. AI is also being used to develop new drugs and materials, and to solve complex mathematical problems.

Some of the ways that AI is being used are controversial.

One example is the use of AI to generate fake news and propaganda. Another example is the use of AI to automate jobs, which could lead to job losses.

Despite the controversies, AI has the potential to make the world a better place.

AI could be used to develop new medical treatments, improve education, and create more sustainable economies.

It is important to be aware of the potential risks and benefits of AI.

As AI continues to develop, it is important to be aware of the potential risks and benefits of this technology. We need to ensure that AI is developed and used in a way that benefits society as a whole.

Here are some of the key points from the video:

  • AI is being used in a variety of ways, including to create realistic talking avatars, generate music, and crush robots.
  • AI is also being used to develop new drugs and materials, and to solve complex mathematical problems.
  • Some of the ways that AI is being used are controversial, such as the use of AI to generate fake news and propaganda.
  • AI has the potential to make the world a better place, but it is important to be aware of the potential risks and benefits of this technology.

Space

Technological challenge: SpaceX will capture the most powerful rocket in history

SpaceX announced that it plans to launch the Starship rocket on its fifth integrated test flight, expected as early as October 13, pending regulatory approval. The launch will take place from the Starbase pad in Boca Chica, Texas. The mission aims to return the Super Heavy booster to its point of origin, a crucial step in plans to reuse the vehicle quickly.

#newsonleo #spacex #space #technology

One of the most innovative aspects of this mission is the rocket capture system, which involves the use of a pair of mechanical arms installed on the launch tower. The idea is that these arms “catch” the rocket in the air and reposition it on the platform, facilitating its reuse. This process is one of the key pieces in SpaceX's long-term schedule to improve the efficiency and frequency of its space launches.

For the Super Heavy rocket's return and capture to occur as planned, SpaceX said thousands of criteria must be met. These criteria involve both the vehicle and launch tower systems. Furthermore, a manual command from the Flight Director will be required to authorize this maneuver. If conditions are not suitable, the booster will follow a standard trajectory, ending with a soft landing in the Gulf of Mexico.
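As a purely illustrative sketch (not SpaceX flight software), the go/no-go logic described above boils down to a simple rule: every health criterion must pass and the Flight Director must send the manual command, otherwise the booster diverts to the soft splashdown.

```python
# Illustrative sketch of the catch go/no-go rule described above -- not SpaceX's actual logic.

def booster_return_plan(all_criteria_ok: bool, flight_director_go: bool) -> str:
    """Tower catch only if every vehicle/tower check passes AND the Flight Director
    sends the manual command; otherwise divert to the offshore fallback."""
    if all_criteria_ok and flight_director_go:
        return "return to launch site for tower catch"
    return "standard trajectory with a soft splashdown in the Gulf of Mexico"

# Example: checks pass, but no manual command was issued.
print(booster_return_plan(all_criteria_ok=True, flight_director_go=False))
```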

The mission's upper-stage flight plan resembles the test carried out in June, in which the Starship spacecraft followed a suborbital trajectory culminating in a controlled splashdown in the Indian Ocean. Bill Gerstenmaier, SpaceX vice president of build and flight reliability, expressed confidence in the company's capabilities, citing the precision achieved in previous landings and the expectation of a successful capture by the launch tower. “We landed with half a centimeter of precision in the ocean,” Gerstenmaier highlighted, signaling optimism for the upcoming tests.

However, the execution of this test still depends on the issuance of a launch license by the Federal Aviation Administration (FAA). Previously, the FAA stated that the permit would not be available until the end of November, citing the need for a detailed environmental review due to changes in the mission profile. The company, along with supporters in Congress, expressed criticism of the schedule, seeking to speed up regulatory procedures.

The situation surrounding the licensing process is being closely monitored as it involves coordination between multiple agencies. In September, the FAA mentioned that SpaceX presented information about how changes to the flight profile affect the environment, covering a larger area than previously anticipated. This factor contributes to the delay in granting final approval, adding an element of uncertainty to the company's launch schedule.

China

China moves forward to compete with Starlink and challenge Elon Musk's dominance

China is accelerating its efforts to compete with Elon Musk's Starlink satellite internet service. Recently, the country launched 18 communications satellites into low orbit, reinforcing its ambition to create its own global internet network. The move puts China in direct confrontation with SpaceX, which currently leads the sector with thousands of satellites in operation.

#newsonleo #china #space #technology #nasa

The Chinese plan aims to guarantee control over communications in situations of war or disaster, when terrestrial infrastructure may be compromised. To this end, China has filed with the International Telecommunication Union for the spectrum needed to operate 51,300 satellites, more than Starlink, which plans to operate up to 42,000.

Despite progress, China still faces technological challenges, mainly the lack of reusable rockets, an area where SpaceX stands out with the Falcon system, which reduces costs and allows frequent launches.

In addition to connecting remote areas, China's advances in the satellite internet sector could have implications for information control. Experts point out that Beijing could export its digital governance model, allowing other countries to adopt censorship measures similar to those implemented in Chinese territory.

Experts also note that the dispute over satellite internet has a clear geopolitical dimension. In the U.S., SpaceX receives broad government support, while China mobilizes state and private resources to strengthen its presence in the sector. The expansion of Chinese satellite constellations could create a global divide in digital infrastructure, with separate networks led by the two countries.



One of the technologies for long-term food storage without a refrigerator is keeping products in honey, which is an excellent preservative; even unprocessed meat can be stored in it for a long time. !BEER #cent #bbh #freecompliments