RE: LeoThread 2024-10-05 09:19


Here is the daily technology #threadcast for 10/5/24. We aim to educate people about this crucial area while also sharing information about what is taking place.

Drop all questions, comments, and articles relating to #technology and the future. The goal is to make this a technology center.


The Guardian: Meta announces new AI model that can generate video with sound


https://www.theguardian.com/technology/2024/oct/04/meta-ai-video-generator-movie-gen

Meta Unveils Movie Gen: A Revolutionary AI Model Capable of Creating Realistic Videos and Audio Clips
In a significant breakthrough in [artificial intelligence](https://inleo.io/@leoglossary/leoglossary-artificial-intelligence) technology, Meta, the parent company of Facebook and Instagram, has announced the development of Movie Gen, a cutting-edge AI model that can generate realistic-seeming video and audio clips in response to user prompts.

This innovation positions Meta as a major player in the rapidly evolving field of generative AI, alongside leading startups like OpenAI and ElevenLabs.
Key features of Movie Gen

Movie Gen boasts impressive capabilities, including:

Realistic Video Generation: The model can create videos up to 16 seconds long, featuring realistic animations, such as animals swimming and surfing, or people performing actions like painting.
Audio Generation: Movie Gen can produce audio clips up to 45 seconds long, complete with background music and sound effects synchronized with the video content.

Video Editing: Users can edit existing videos using the model, allowing for seamless integration of new elements, such as inserting objects or changing environments.

Customization: Movie Gen can utilize users' photos to create personalized videos.

Performance and Comparison

Meta's benchmark tests demonstrate that Movie Gen performs favorably against offerings from prominent startups, including Runway, OpenAI, ElevenLabs, and Kling. Blind tests conducted by the company showed promising results, with Movie Gen's outputs often indistinguishable from real videos.

Applications and Partnerships

While Meta has no immediate plans to release Movie Gen for open use by developers, the company is collaborating with the entertainment community and content creators to explore potential applications. Movie Gen is expected to be integrated into Meta's products in 2025.

The entertainment industry has expressed interest in leveraging generative AI video technology to enhance and expedite film-making. However, concerns surrounding copyright infringement and the potential misuse of AI-generated deepfakes have sparked debate.

Concerns and Controversies

Lawmakers and industry professionals have raised concerns about the risks associated with AI-generated fakes, particularly in the context of elections and copyright infringement.

OpenAI's approach to training its Sora model has faced criticism, with actress Scarlett Johansson accusing the company of imitating her voice without permission.

The Future of AI-Generated Content

As Meta and other companies continue to push the boundaries of generative AI, the lines between reality and fiction are becoming increasingly blurred. The integration of Movie Gen into Meta's products and its potential applications in the entertainment industry will likely have far-reaching implications for the future of content creation.

About Movie Gen's Development

According to Meta's blog post and research paper, the company utilized a combination of licensed and publicly available datasets to build Movie Gen. This approach has sparked discussion about the ethics of training AI models on copyrighted materials without permission.

As the technology continues to evolve, it remains to be seen how Meta and other industry players will address concerns surrounding AI-generated content. One thing is certain, however: Movie Gen represents a significant leap forward in the field of generative AI, and its impact will be felt across various industries in the years to come.

What is Waymo?

Waymo, a subsidiary of Alphabet Inc., the parent company of Google, is a pioneering company in the development and deployment of autonomous vehicle technology. As a leader in the field of self-driving cars, Waymo has been at the forefront of the autonomous vehicle revolution, pushing the boundaries of innovation and safety.

Waymo began in 2009 as the Google Self-Driving Car project, an internal effort within Google. In 2016, Google rebranded the project as Waymo and spun it off as a separate company under Alphabet, marking a significant milestone in its journey towards autonomous driving.

Waymo's cutting-edge technology is based on a combination of advanced sensors, sophisticated software, and precise mapping technology. The company's autonomous vehicles are equipped with a range of sensors, including high-definition cameras, lidar (light detection and ranging), radar, and ultrasonic sensors, which work together to detect and respond to their surroundings. This advanced sensor suite enables Waymo's vehicles to navigate roads and traffic with ease, even in complex and dynamic environments.

Waymo's autonomous vehicles are designed to operate in a variety of environments, including urban and rural areas, and in a range of weather conditions, from sunny days to heavy rain and snow. The company's robust mapping technology allows its vehicles to understand their surroundings and navigate through complex environments, such as busy city streets and winding country roads.

Since 2009, Waymo has been testing its autonomous vehicles on public roads, accumulating millions of miles of testing data and refining its technology to ensure the highest level of safety and reliability. The company has also partnered with several major automakers, including Fiat Chrysler Automobiles (FCA) and Jaguar Land Rover, to integrate its technology into their vehicles, further expanding its reach and impact.

Waymo has launched several commercial services, including:

  1. Waymo One: A self-driving taxi service that operates in Phoenix, Arizona, providing passengers with a safe and convenient transportation option.
  2. Waymo Via: A self-driving delivery service that operates in Phoenix, Arizona, enabling businesses to transport goods and packages efficiently and safely.
  3. Waymo Driver: A self-driving technology platform that can be integrated into existing vehicles, allowing automakers to offer autonomous driving capabilities to their customers.

Waymo's technology has the potential to revolutionize the way we travel, with benefits including improved road safety, reduced traffic congestion, and increased mobility for people with disabilities. As the company continues to innovate and push the boundaries of autonomous driving, it is poised to make a significant impact on the future of transportation.

What is an intervention?

The intervention process used by Waymo is a complex system that involves multiple sensors, machine learning algorithms, and human oversight. Here's a more detailed explanation of how it works:

Sensor Data:

Waymo's autonomous vehicles are equipped with a suite of sensors that provide a 360-degree view of the environment. These sensors include:

  1. Cameras: Waymo uses high-resolution cameras that capture images of the road and surrounding environment. These cameras are mounted on the vehicle's roof and sides, providing a wide field of view.
  2. Lidar: Lidar (Light Detection and Ranging) sensors use laser light to create high-resolution 3D maps of the environment. This data is used to detect objects, lanes, and other features on the road.
  3. Radar: Radar sensors use radio waves to detect objects and measure their distance and speed.
  4. Ultrasonic sensors: Ultrasonic sensors use high-frequency sound waves to detect objects and measure their distance.

This sensor data is processed in real-time by Waymo's autonomous driving system, which uses machine learning algorithms to interpret the data and make decisions about vehicle control.

Machine Learning Algorithms:

Waymo's autonomous driving system uses a combination of machine learning algorithms to interpret the sensor data and make decisions about vehicle control. These algorithms include:

  1. Object detection: Waymo's algorithms use the sensor data to detect objects on the road, such as cars, pedestrians, and bicycles.
  2. Track estimation: The algorithms use the sensor data to estimate the vehicle's position and trajectory on the road.
  3. Motion forecasting: The algorithms use the sensor data to predict the motion of other vehicles and pedestrians on the road.
  4. Decision-making: The algorithms use the sensor data and motion forecasting to make decisions about vehicle control, such as acceleration, braking, and steering.
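To make the pipeline above concrete, here is a minimal, purely illustrative sketch of how detections might flow into a driving decision. The class, the thresholds, and the two-second forecast horizon are invented for illustration; Waymo's actual stack is vastly more sophisticated and is not public.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # "car", "pedestrian", "cyclist", ...
    distance_m: float  # how far ahead the object is
    speed_mps: float   # how fast it is closing on the vehicle

def forecast_position(d: Detection, horizon_s: float) -> float:
    # Naive constant-velocity forecast of the gap a few seconds from now.
    return d.distance_m - d.speed_mps * horizon_s

def decide(detections: list[Detection], ego_speed_mps: float) -> str:
    # Brake if anything is predicted to enter a 10 m safety bubble within
    # two seconds; otherwise keep cruising.
    for d in detections:
        if forecast_position(d, horizon_s=2.0) < 10.0:
            return "brake"
    return "cruise"

# Example: a pedestrian 18 m ahead, closing at 5 m/s, triggers braking.
print(decide([Detection("pedestrian", 18.0, 5.0)], ego_speed_mps=12.0))
```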

Human Oversight:

While Waymo's autonomous driving system is designed to make decisions autonomously, human oversight is still an important part of the intervention process. Waymo has a team of human safety engineers and operators who monitor the vehicles and intervene as needed.

These humans can provide additional context and expertise to help determine when to intervene. For example, if the autonomous driving system detects a potential safety issue, the human operator can review the sensor data and make a decision about whether to intervene.

Intervention Criteria:

Waymo has established a set of criteria for determining when to intervene. These criteria include:

  1. Safety: If the autonomous driving system detects a potential safety issue, such as a pedestrian or vehicle in the road, it will take control of the vehicle to ensure the safety of the passengers and other road users.
  2. Technical issues: If the autonomous driving system encounters a technical issue, such as a software glitch or a hardware malfunction, Waymo may intervene to diagnose and resolve the issue.
  3. Emergency situations: In the event of an emergency, such as a medical emergency or a vehicle accident, Waymo may intervene to take control of the vehicle and provide assistance.

Intervention Types:

There are several types of interventions that Waymo may use, including:

  1. Manual takeover: In this type of intervention, the human operator takes control of the vehicle to ensure the safety of the passengers and other road users.
  2. Semi-autonomous intervention: In this type of intervention, the autonomous driving system takes control of the vehicle, but the human operator can still intervene if necessary.
  3. Autonomous intervention: In this type of intervention, the autonomous driving system takes full control of the vehicle without any human intervention.

Overall, Waymo's intervention process is designed to prioritize safety and ensure the well-being of the passengers and other road users. While the system is designed to operate autonomously, human oversight is still an important part of the process.

Crypto is going to be big in the post-labor economics world.

'World-first' indoor vertical farm to produce 4M pounds of berries a year

Major steps towards better, sustainable and affordable food production free of environmental challenges have been taken, with the "world's first farm to grow indoor, vertically farmed berries at scale" opening in Richmond, VA. It's backed by an international team of scientists that see this new phase of agriculture as a way to ease global food demands.

#newsonleo #food #technology #verticalfarming

Overview of the Farm

The Plenty Richmond farm is a cutting-edge vertical farming facility located in Richmond, Virginia. It is designed to produce over four million pounds of strawberries annually, grown indoors on 30-foot-tall towers. The facility covers less than a single acre, making it an exemplary model of efficient land use.

Controlled Environment

The farm's controlled environment is the key to its success. The facility is designed to optimize pollination and fruit production, with a highly controlled atmosphere that mimics the natural conditions found in strawberry fields. It features 12 separate growing rooms, each with its own temperature, light, and humidity controls.

The controlled environment allows for precise management of the growing conditions, resulting in more uniform fruit production and reduced waste. The farm's team of experts has carefully calibrated the environment to optimize the growth of strawberries, taking into account factors such as light intensity, temperature, and humidity.

Dynamic Lighting and Sensors

One of the most innovative features of the Plenty Richmond Farm is its use of dynamic lighting and sensors. The farm's lighting system is designed to mimic the natural cycles of photosynthesis, with sensors that turn lights on and off throughout the day to optimize energy usage.

By harnessing the power of dynamic lighting and sensors, the farm can tap into cheaper energy sources at off-peak times, maximizing the advantages of vertical farming while minimizing its environmental footprint. This approach has been shown to increase crop yields and reduce energy costs, making it an attractive solution for farmers and consumers alike.
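As a rough illustration of the idea, the sketch below schedules a fixed number of grow-light hours into the cheapest hours of a hypothetical electricity tariff. The prices and the 16-hour photoperiod are made-up figures, not Plenty's actual operating data.

```python
# Toy scheduler: give the crop its required light-hours, preferring the
# cheapest hours of the day. All numbers are illustrative placeholders.
hourly_price = [0.08]*6 + [0.15]*12 + [0.10]*6   # $/kWh for hours 0..23
LIGHT_HOURS_NEEDED = 16

cheapest_hours = sorted(range(24), key=lambda h: hourly_price[h])[:LIGHT_HOURS_NEEDED]
schedule = ["ON" if h in cheapest_hours else "OFF" for h in range(24)]

print("Lights on during hours:", sorted(cheapest_hours))
print("Estimated cost per kW of lighting: $%.2f/day"
      % sum(hourly_price[h] for h in cheapest_hours))
```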

Pollination Optimization

Pollination is a critical factor in strawberry production, and the Plenty Richmond Farm has developed innovative solutions to optimize pollination. The farm's team has engineered the environment to promote efficient pollination, resulting in more uniform fruit production and reduced waste.

By optimizing pollination, the farm can reduce the need for external pollination methods, such as bees, and minimize the risk of environmental contamination. This approach also reduces the risk of disease and pests, making it a more sustainable and efficient solution for strawberry production.

Reducing Environmental Impact

The Plenty Richmond Farm is designed to minimize its environmental impact, with a number of innovative features that reduce waste and minimize the use of resources. The farm's controlled environment and shorter supply chain also reduce the risk of environmental contamination, making it an attractive solution for consumers seeking environmentally friendly produce.

Some of the ways in which the farm reduces environmental impact include:

  • 97% less land usage: The farm covers an area of less than a single acre, making it an exemplary model of efficient land use.
  • Up to 90% less water usage: The farm's controlled environment and advanced irrigation system minimize water usage, reducing the risk of drought and environmental contamination.
  • No pesticides: The farm's controlled environment and optimized growing conditions eliminate the need for pesticides, reducing the risk of environmental contamination and minimizing the impact on beneficial insects.
  • Shorter supply chain: The farm's controlled environment and shorter supply chain further reduce the risk of environmental contamination between harvest and shelf.

Partnership with Driscoll's

The Plenty Richmond Farm is a partnership between Plenty and global company Driscoll's, renowned for its high-quality strawberries. The partnership aims to deliver consistent, premium strawberries to grocery store shelves in early 2025.

Driscoll's CEO, Soren Bjorn, sees the partnership as a powerful step forward in driving category growth and catering to the increasingly demanding needs of consumers. "We're excited to join forces with Plenty to consistently deliver an ultra-premium Driscoll's strawberry year-round," said Bjorn.

Innovative Technologies

The Plenty Richmond Farm is equipped with a range of innovative technologies that enable efficient and sustainable strawberry production. Some of the key technologies used on the farm include:

  • LED lighting: The farm uses LED lighting to provide optimal light intensity and spectrum for strawberry growth.
  • Advanced irrigation system: The farm's advanced irrigation system minimizes water usage and ensures that strawberries receive the precise amount of moisture they need to thrive.
  • Climate control: The farm's climate control system ensures that strawberries are grown in optimal conditions, with precise temperature, humidity, and air circulation controls.
  • Sensors and monitoring systems: The farm's sensors and monitoring systems enable real-time monitoring of growing conditions, allowing for precise adjustments to be made to optimize strawberry production.

Future Plans

The Plenty Richmond Farm is just the beginning of Plenty's ambitious plans for vertical farming. The company is committed to expanding its operations and developing new technologies that enable efficient and sustainable strawberry production.

Some of the future plans for Plenty include:

  • Expansion of operations: Plenty plans to expand its operations to new locations, enabling the production of strawberries in other regions and countries.
  • Development of new technologies: Plenty is committed to developing new technologies that enable efficient and sustainable strawberry production, including advanced LED lighting systems and climate control systems.
  • Partnerships and collaborations: Plenty plans to partner with other companies and organizations to develop new technologies and expand its operations, enabling the production of high-quality strawberries on a global scale.

Overall, the Plenty Richmond Farm is a pioneering example of innovative vertical farming, demonstrating the potential for climate-agnostic, sustainable, and efficient food production. By harnessing the power of technology and collaborating with industry leaders, Plenty is poised to revolutionize the strawberry industry and inspire a new generation of agricultural innovators.

AI begins its ominous split away from human thinking

AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.

Remember DeepMind's AlphaGo? It represented a fundamental breakthrough in AI development, because it was one of the first game-playing AIs that took no human instruction and read no rules.

Instead, it used a technique called self-play reinforcement learning (RL) to build up its own understanding of the game: pure trial and error across millions, even billions of virtual games, starting out more or less randomly pulling whatever levers were available, and attempting to learn from the results.
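For a feel of how self-play works, here is a heavily simplified sketch using the toy game of Nim (take 1 to 3 stones, last stone wins) instead of Go. It learns a table of move values purely from wins and losses in games against itself; AlphaGo's real method added deep neural networks and Monte Carlo tree search on top of this basic idea.

```python
import random
from collections import defaultdict

# Self-play on Nim: players alternately take 1-3 stones from a pile, and
# whoever takes the last stone wins. No strategy is given up front -- the
# move values are learned purely from the outcomes of games against itself.
Q = defaultdict(float)          # (stones_left, move) -> estimated value
EPSILON, ALPHA = 0.2, 0.5

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)                   # explore
    return max(moves, key=lambda m: Q[(stones, m)])   # exploit

for _ in range(50_000):
    stones, history = 15, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who made the last move won; credit alternating moves +1/-1.
    for i, (s, m) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, m)] += ALPHA * (reward - Q[(s, m)])

# With enough games the policy tends to rediscover the classic strategy:
# leave your opponent a multiple of 4 stones.
print({s: max((1, 2, 3), key=lambda m: Q[(s, m)]) for s in range(5, 16)})
```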

Within two years of the start of the project in 2014, AlphaGo had beaten the European Go champion 5-0 – and by 2017 it had defeated the world's #1 ranked human player.

AlphaGo soundly defeated many-times world-champion Go master Lee Sedol in 2016, using strange moves that would be incredibly rare from a human opponent – and indeed, that evolved the human understanding of the game (image: DeepMind)

At this point, DeepMind unleashed a similar AlphaZero model on the chess world, where models like Deep Blue, trained on human thinking, knowledge and rule sets, had been beating human grandmasters since the 90s. AlphaZero played 100 matches against the reigning AI champion, Stockfish, winning 28 and tying the rest.

Human thinking puts the brakes on AI
DeepMind started dominating these games – and shogi, Dota 2, Starcraft II and many others – when it jettisoned the idea that emulating a human was the best way to get a good result.

Bound by different limits than us, and gifted with different talents, these electronic minds were given the freedom to interact with things on their own terms, play to their own cognitive strengths, and build their own ground-up understanding of what works and what doesn't.

AlphaZero doesn't know chess like Magnus Carlsen does. It's never heard of the Queen's Gambit or studied the great grandmasters. It's just played a shit-ton of chess, and built up its own understanding against the cold, hard logic of wins and losses, in an inhuman and inscrutable language it created itself as it went.

As a result it's so much better than any model trained by humans that it's an absolute certainty: no human, and no model trained on human thinking, will ever again have a chance in a chess game if there's an advanced reinforcement learning agent on the other side.

And something similar, according to people that are better-placed to know the truth than anyone else on the planet, is what's just started happening with the latest, greatest version of ChatGPT.

OpenAI's new o1 model begins to diverge from human thinking
ChatGPT and other Large Language Model (LLM) AIs, like those early chess AIs, have been trained on as much human knowledge as was available: the entire written output of our species, give or take.

And they've become very, very good. All this palaver about whether they'll ever achieve Artificial General Intelligence... Good grief, can you picture a human that could compete with GPT-4o across the breadth of its capabilities?

But LLMs specialize in language, not in getting facts right or wrong. That's why they "hallucinate" – or BS – giving you wrong information in beautifully phrased sentences, sounding as confident as a news anchor.

Language is a collection of weird gray areas where there's rarely an answer that's 100% right or wrong – so LLMs are typically trained using reinforcement learning with human feedback. That is, humans pick which answers sound closer to the kind of answer they were wanting. But facts, and exams, and coding – these things do have a clear success/fail condition; either you got it right, or you didn't.
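Here is a toy contrast between the two kinds of signal, using invented helper functions: a preference-style reward only says which of two answers a human liked more, while a verifiable reward runs a hard check and returns exactly 1 or 0. This is an illustration of the distinction, not anyone's actual training code.

```python
# Two kinds of reward signal. For open-ended language there is no single
# correct answer, so training leans on human preference between candidates.
# For math or code there is a checkable ground truth, so the model can get
# a hard 1/0 signal and learn by pure trial and error.

def preference_reward(answer_a: str, answer_b: str, human_prefers_a: bool) -> dict:
    # RLHF-style: the "reward" is only relative -- which answer a person liked more.
    return {"a": 1.0, "b": 0.0} if human_prefers_a else {"a": 0.0, "b": 1.0}

def verifiable_reward(candidate: str, check) -> float:
    # Verifiable domain: run the check (unit test, numeric comparison, proof
    # verifier) and reward success exactly, with no human in the loop.
    return 1.0 if check(candidate) else 0.0

# Toy example: grading attempts at "what is 17 * 24?"
attempts = ["408", "414", "The answer is 408"]
check = lambda s: "408" in s
print(preference_reward("answer A", "answer B", human_prefers_a=True))
print([verifiable_reward(a, check) for a in attempts])   # [1.0, 0.0, 1.0]
```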

And this is where the new o1 model has started to split away from human thinking and start bringing in that insanely effective AlphaGo approach of pure trial and error in pursuit of the right result.

o1's baby steps into reinforcement learning
In many ways, o1 is pretty much the same as its predecessors – except that OpenAI has built in some 'thinking time' before it starts to answer a prompt. During this thinking time, o1 generates a 'chain of thought' in which it considers and reasons its way through a problem.

And this is where the RL approach comes in – o1, unlike previous models that were more like the world's most advanced autocomplete systems, really 'cares' whether it gets things right or wrong. And through part of its training, this model was given the freedom to approach problems with a random trial-and-error approach in its chain of thought reasoning.

It still only had human-generated reasoning steps to draw from, but it was free to apply them randomly and draw its own conclusions about which steps, in which order, are most likely to get it toward a correct answer.

And in that sense, it's the first LLM that's really starting to create that strange, but super-effective AlphaGo-style 'understanding' of problem spaces. In the domains where it's now surpassing Ph.D.-level capabilities and knowledge, it got there essentially by trial and error, by chancing upon the correct answers over millions of self-generated attempts, and by building up its own theories of what's a useful reasoning step and what's not.

So in topics where there's a clear right and wrong answer, we're now beginning to see this alien intelligence take the first steps past us on its own two feet. If the games world is a good analogy for real life, then friends, we know where things go from here. It's a sprinter that'll accelerate forever, given enough energy.

But o1 is still primarily trained on human language. That's very different from truth – language is a crude and low-res representation of reality. Put it this way: you can describe a biscuit to me all day long, but I won't have tasted it.

So what happens when you stop describing the truth of the physical world, and let the AIs go and eat some biscuits? We'll soon begin to find out, because AIs embedded in robot bodies are now starting to build their own ground-up understanding of how the physical world works.

AI's pathway toward ultimate truth
Freed from the crude human musings of Newton, and Einstein, and Hawking, embodied AIs will take a bizarre AlphaGo-style approach to understanding the world. They'll poke and prod at reality, and observe the results, and build up their own theories in their own languages about what works, what doesn't, and why.

They won't approach reality like humans or animals do. They won't use a scientific method like ours, or split things into disciplines like physics and chemistry, or run the same kinds of experiments that helped humans master the materials and forces and energy sources around them and dominate the world.

Embodied AIs given the freedom to learn like this will be hilariously weird. They'll do the most bizarre things you can think of, for reasons known only to themselves, and in doing so, they'll create and discover new knowledge that humans could never have pieced together.

Unshackled from our language and thinking, they won't even notice when they break through the boundaries of our knowledge and discover truths about the universe and new technologies that humans wouldn't stumble across in a billion years.

We're granted some reprieve here; this isn't happening in a matter of days or weeks, like so much of what's going on in the LLM world.

Reality is the highest-resolution system we know of, and the ultimate source of truth. But there's an awful lot of it, and it's also painfully slow to work with; unlike in simulation, reality demands that you operate at a painfully slow one minute per minute, and you're only allowed to use as many bodies as you've actually built.

So embodied AIs attempting to learn from base reality won't initially have the wild speed advantage of their language-based forebears. But they'll still be a lot faster than evolution, with the ability to pool their learnings among co-operative groups in swarm learning.

Companies like Tesla, Figure and Sanctuary AI are working feverishly at building humanoids to a standard that's commercially useful and cost-competitive with human labor. Once they achieve that - if they achieve that - they'll be able to build enough robots to start working on that ground-up, trial-and-error understanding of the physical world, at scale and at speed.

They'll need to pay their way, though. It's funny to think about, but these humanoids might learn to master the universe in their downtime from work.

Apologies for these rather esoteric and speculative thoughts, but as I keep finding myself saying, what a time to be alive!

OpenAI's o1 model might not look like a quantum leap forward, sitting there in GPT's drab textual clothing, looking like just another invisible terminal typist. But it really is a step-change in the development of AI – and a fleeting glimpse into exactly how these alien machines will eventually overtake humans in every conceivable way.

For a wonderful deeper dive into how reinforcement learning makes o1 a step-change in the development of AI, I highly recommend the video below, from the excellent AI Explained channel.

Facebook launches a Gen Z-focused redesign

Facebook wants to woo more younger users to join its social network, instead of spending all their time on TikTok, Instagram, and other social apps.

#technology #newsonleo #meta #facebook #genz

The New "Local" Tab

Facebook is introducing a new tab called "Local" that will pull in local content from across platforms like Marketplace, Groups, and Events into a single section. This tab will initially be available in testing in select U.S. cities, including Austin, New York City, Los Angeles, Washington D.C., Chicago, Charlotte, Dallas, Houston, San Francisco, and Phoenix.

The Local tab will provide users with a centralized hub for finding local content, such as:

  • Nearby activities and events
  • Local groups offering items for sale or for free
  • Local recommendations for new neighborhood hot spots
  • Other local information and resources

This change is aimed at making Facebook more appealing to users who want to connect with their local community or find local events and activities.

The Swipeable Local Section

In addition to the Local tab, Facebook will also highlight a user's local community in other ways on the platform. A new, swipeable section will appear in the user's Facebook Feed, showcasing interesting posts and information from the area.

This section will include:

  • Local events
  • Local Facebook Groups
  • Notable people or businesses in the area
  • Items for sale on Marketplace
  • Other local information and resources

The swipeable section will allow users to easily browse and discover local content, making it easier to find and connect with others in their community.

The New "Explore" Tab

Facebook is introducing a new "Explore" tab that will be focused on personalized recommendations. This section will be powered by an algorithm that will surface content that entertains, as well as connects users to interests, even if they're narrowly defined.

The Explore tab will be a prominent feature of the redesigned app, becoming the fifth button on the bottom navigation bar on iOS (top bar on Android).

The Explore tab will feature two main sections:

  • A "For You" feed that will surface content that is relevant to the user's interests and preferences
  • A "Nearby" section that will feature local content, such as events, groups, and businesses

The Explore tab is aimed at providing users with a more personalized experience, where they can discover new content and connect with others who share similar interests.

The Updated Video Tab

Facebook is updating its Video tab to offer a full-screen video player that will allow users to watch short-form, long-form, and live videos in one place. This will give Reels a more prominent place on Facebook, the company notes, and reflects the use of video by younger users.

The updated Video tab will also feature:

  • A "Reels" section that will showcase short-form videos
  • A "Long-Form" section that will feature longer videos
  • A "Live" section that will feature live videos

This change is aimed at making Facebook more appealing to users who want to watch videos, and providing a more seamless experience for users who prefer to consume video content on the platform.

The Updated Facebook Events and Groups

Facebook Events and Groups are being updated to provide users with a more streamlined experience.

Facebook Events will receive an upgrade by offering users both a Weekly and Weekend Digest of upcoming events based on their interests. These will come via a Facebook notification, the company says.

Users will also be able to invite Instagram followers to events created on Facebook, as well as via SMS and email invites to those who have not registered an account on the site.

The updated Facebook Groups feature will include:

  • A new AI feature that will help members of groups find answers to questions — including those that have previously been asked
  • A "Group AI" offering that will be enabled by a Group's admin, which will introduce a chat-like interface where Group members can ask questions and be linked to relevant group posts

The Groups update in particular targets a major area of Facebook for the next generation, as Groups now attract a large number of users.

The New "Matchmaker" Feature on Facebook Dating

Facebook Dating will add a "Matchmaker" feature that will allow up to 5 friends to swipe through potential matches for you. The company says this feature was introduced because Facebook Dating conversations had increased 24% year-over-year among young adults in the U.S. and Canada.

The Matchmaker feature is aimed at providing users with a more social and interactive experience on Facebook Dating, and helping to increase the number of users who use the platform.

The New Meta AI-Powered "Imagine Yourself" Image-Generation Feature

Facebook is integrating a new Meta AI-powered "Imagine Yourself" image-generation feature into its platform. This feature will allow users to generate custom images based on their interests and preferences.

The "Imagine Yourself" feature will be available on Facebook's Feed, stories, and on the user's profile page, as previously announced.

The New AI Comment Summaries

Facebook is rolling out AI comment summaries across public Groups, Pages, and Creators. These summaries will provide users with a quick and easy way to understand the context and meaning of comments, without having to read the entire comment thread.

While AI comment summaries may be helpful for some users, they may also detract from the more engaging and interactive experience of scrolling through comment threads.

Overall, the changes that Facebook is making to its platform aim to make the network more appealing to the next generation of users, who use the site in a different way than the young adults who first joined it in their youth and are now approaching middle age.

Tesla Cybertruck non-Foundation series invites are going out. That means those who get the invite can configure their vehicles. It is around $80K.

#newsonleo #technology #tesla

The 'Mozart of Math' isn't worried about AI replacing math nerds -- ever

Terence Tao, a UCLA professor considered to be the "world's greatest living mathematician," last month compared ChatGPT's o1 reasoning model to a "mediocre, but not completely incompetent" graduate student that could correctly answer a complex analysis problem with "a lot of hints and prodding."

#newsonleo #ai #mathematics #technology #chatgpt

AI might never beat its human teachers, he now tells The Atlantic. “One key difference [today] between graduate students and AI is that graduate students learn. You tell an AI its approach doesn’t work, it apologizes, it will maybe temporarily correct its course, but sometimes it just snaps back to the thing it tried before.”

The good news for math prodigies, adds Tao, is that AI and mathematicians will more likely always be collaborators: instead of replacing math nerds, AI will enable them to explore large-scale, previously unreachable problems. Says Tao of the future, "You might have a project and ask, 'What if I try this approach?' And instead of spending hours and hours actually trying to make it work, you guide a GPT to do it for you."

World's first 3D-printed hotel rises in the Texas desert

The El Cosmico project, exploring its innovative design, construction process, and sustainable approach.

Design and Architecture

The El Cosmico project is a collaboration between Icon, a leading 3D-printed architecture firm, and the Bjarke Ingels Group (BIG), a renowned architecture firm known for its sustainable and innovative designs. BIG's boss, Bjarke Ingels, explains that the design for El Cosmico is a "conversation between El Cosmico's past and future." The result is a series of curvilinear structures that blend seamlessly into the surrounding desert landscape.

The design features a range of structures, including:

  1. Vacation Dwellings: Impressively curving and dome-topped vacation dwellings that draw inspiration from the surrounding landscape. These structures are designed to be comfortable and sustainable, with large windows and sliding glass doors that provide stunning views of the desert.
  2. Tent Areas: Spacious tent areas that offer a unique and immersive experience for guests. These areas are designed to be flexible and adaptable, with modular components that can be easily reconfigured to suit different needs.
  3. Central Swimming Pool: A central swimming pool that serves as a social hub for the community. The pool is designed to be a focal point, with a large glass wall that provides stunning views of the surrounding landscape.
  4. Communal Facilities: Shared amenities, such as restaurants, bars, and recreational facilities, that provide a sense of community and connection among guests.
  5. Permanent Residences: Luxury residences designed by BIG, boasting up to four bedrooms and expansive views of the local scenery. These residences are available for sale, starting at a cool US$2.29 million.

Construction Process

The construction process for El Cosmico is a unique and innovative approach that combines 3D printing with traditional building methods. Icon's 3D printers are constructing the basic structures of the hotel, using the company's own Vulcan 3D-printing system.

The Vulcan 3D-printing system consists of large 3D printers overseen by human operators. The printing process involves extruding a proprietary cement-like mixture called Lavacrete out of a nozzle in layers to build up the basic structures of the residences. Once the printing process is complete, human builders will then come in to complete the buildings by adding roofs, wiring, windows, and any other required features.

Lavacrete

Lavacrete is a proprietary cement-like mixture developed by Icon specifically for 3D printing. It is a sustainable and environmentally friendly material that is made from a combination of cement, sand, and limestone. Lavacrete has a number of unique properties that make it ideal for 3D printing, including:

  1. High Strength: Lavacrete has a high compressive strength, making it suitable for load-bearing applications.

Sustainable Approach

The El Cosmico project is a testament to the potential of 3D printing in sustainable building. The project's design and construction process are optimized for minimal environmental impact, with a focus on:

  1. Locally Sourced Materials: The use of locally sourced materials, such as sand, soils, and colors of the terroir, reduces the carbon footprint of the project.
  2. Energy Efficiency: The buildings are designed to be energy efficient, with features such as large windows and insulation that minimize heat gain and loss.
  3. Water Conservation: The project incorporates water-conserving measures, such as low-flow fixtures and greywater reuse systems.

Affordable Housing

Icon is also providing affordable 3D-printed housing for the local community, as part of the El Cosmico project. This initiative is part of a larger effort to address the shortage of affordable housing in the region. The affordable housing units will be designed to be energy efficient, durable, and affordable, with prices starting at around US$50,000.

Conclusion

The El Cosmico project is a groundbreaking example of 3D printing in sustainable building. The project's innovative design, construction process, and sustainable approach make it a model for the future of building and development. As the world's first 3D-printed hotel, El Cosmico serves as a beacon for innovation and sustainability in the architecture industry, offering a unique blend of form, function, and environmental responsibility.

What is 3D printing when it comes to construction?


In the construction industry, 3D printing refers to the process of creating three-dimensional structures using a 3D printer, a machine that layers materials such as concrete, mortar, or other specialized materials to build a structure. This technology is also known as "additive manufacturing" or "contour crafting." The 3D printing process in construction is a game-changer, allowing for the creation of complex shapes and structures that would be difficult or impossible to build using traditional construction methods.

The 3D printing process in construction typically involves the following steps:

  1. Design: The structure is designed using computer-aided design (CAD) software or other 3D modeling tools, taking into account factors such as the type of material to be used, the desired shape and size of the structure, and any specific requirements or constraints.
  2. Printing: The 3D printer reads the design file and begins printing the structure layer by layer, using a combination of materials such as concrete, mortar, or other specialized materials. The printer can print at a rate of up to several meters per hour, depending on the size and complexity of the structure (a rough print-time sketch follows this list).
  3. Curing: The printed layers are allowed to cure, either naturally or with the aid of heat, light, or other curing agents. This process can take anywhere from a few hours to several days, depending on the type of material used and the environmental conditions.
  4. Finishing: The finished structure is then finished with additional materials, such as paint, insulation, or other finishes, to provide a smooth and durable surface.
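As a rough illustration of step 2, here is a back-of-the-envelope estimate of how long a layer-by-layer wall print might take. Every figure in it (layer height, nozzle speed, wall dimensions) is a placeholder assumption, not a published specification from any printer vendor.

```python
# Back-of-the-envelope print-time estimate for a 3D-printed wall.
wall_height_m    = 3.0
wall_length_m    = 30.0      # total perimeter being printed
layer_height_m   = 0.025     # 25 mm beads
travel_speed_ms  = 0.15      # nozzle speed along the bead, metres per second

layers           = wall_height_m / layer_height_m      # 120 layers
time_per_layer_s = wall_length_m / travel_speed_ms     # 200 s per pass
total_hours      = layers * time_per_layer_s / 3600

print(f"{layers:.0f} layers, ~{total_hours:.1f} hours of continuous printing")
```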

The benefits of 3D printing in construction are numerous:

  1. Increased speed: 3D printing can build structures much faster than traditional construction methods, reducing the time and cost associated with labor and materials.
  2. Reduced labor costs: 3D printing can reduce the need for manual labor, as the printer does the work, freeing up workers to focus on other tasks.
  3. Improved accuracy: 3D printing can create structures with precise dimensions and shapes, reducing the risk of errors and ensuring a high level of quality.
  4. Increased complexity: 3D printing can create complex shapes and structures that would be difficult or impossible to build using traditional methods, such as curved walls or intricate details.
  5. Sustainability: 3D printing can use recycled materials and reduce waste, making it a more sustainable option for construction.

Some examples of 3D printing in construction include:

  1. Building houses: 3D printing has been used to build entire houses, including walls, floors, and roofs, using a combination of concrete and other materials.
  2. Creating infrastructure: 3D printing has been used to create bridges, roads, and other infrastructure, such as drainage systems and utility tunnels.
  3. Printing concrete structures: 3D printing has been used to create concrete structures such as columns, beams, and walls, which can be used in a variety of applications, from buildings to bridges.
  4. Creating complex shapes: 3D printing has been used to create complex shapes and structures that would be difficult or impossible to build using traditional methods, such as curved walls or intricate details.

Overall, 3D printing in construction has the potential to revolutionize the way we build structures, making it faster, more efficient, and more sustainable. As the technology continues to evolve, we can expect to see even more innovative applications of 3D printing in the construction industry.

What is the forecast for 3D printing in the construction business? How big a disruption is it?

The future of 3D printing in the construction industry is looking bright, with many experts predicting significant growth and disruption in the coming years. Here are some reasons why:

  1. Increasing Adoption: 3D printing in construction is gaining traction, with more companies and governments investing in the technology. According to a report by Grand View Research, the global 3D printing in construction market is expected to reach USD 12.4 billion by 2025, growing at a CAGR of 43.1% during the forecast period (a short sketch of how that compounding works follows this list). This rapid growth is driven by the increasing demand for innovative and efficient construction methods.
  2. Cost Savings: 3D printing can reduce construction costs by up to 50% compared to traditional methods. This is because 3D printing eliminates the need for manual labor, reduces waste, and enables the creation of complex shapes and structures that would be difficult or impossible to build using traditional methods. For example, 3D printing can print entire buildings, including walls, floors, and roofs, in a single process, reducing labor costs and minimizing the risk of errors.
  3. Increased Speed: 3D printing can build structures much faster than traditional methods. This is because 3D printing can print multiple layers at once, reducing the time it takes to complete a project. For instance, a 3D printed house can be built in just a few days, compared to traditional construction methods that can take weeks or even months.
  4. Improved Quality: 3D printing can create structures with precise dimensions and shapes, reducing the risk of errors and improving the overall quality of the finished product. This is because 3D printing allows for the creation of complex shapes and structures that would be difficult or impossible to build using traditional methods.
  5. Sustainability: 3D printing can use recycled materials and reduce waste, making it a more sustainable option for construction. For example, 3D printing can use recycled plastic and concrete to create structures, reducing the environmental impact of construction.
  6. Increased Complexity: 3D printing can create complex shapes and structures that would be difficult or impossible to build using traditional methods. This allows for the creation of innovative and unique structures that can be used in a variety of applications, from residential buildings to commercial developments.
  7. Government Support: Governments around the world are investing in 3D printing in construction, recognizing its potential to transform the industry. For example, the Chinese government has launched a number of initiatives to promote the development of 3D printing in construction, including the creation of 3D printing hubs and the provision of funding for 3D printing projects.
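For reference, this is all the compounding arithmetic behind a CAGR figure like the one in point 1: each year multiplies the previous value by (1 + growth rate). The sketch simply extends the report's 2025 base figure forward to show how quickly 43.1% compounds; it is an illustration of the math, not an additional forecast.

```python
# How a 43.1% CAGR compounds: starting from the projected 2025 market size
# of USD 12.4 billion, each additional year multiplies the figure by 1.431.
market_2025_usd_bn = 12.4
cagr = 0.431

for years_out in range(0, 6):
    value = market_2025_usd_bn * (1 + cagr) ** years_out
    print(f"2025 + {years_out} yr: ${value:,.1f}B")
```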

The disruption caused by 3D printing in construction is significant, as it has the potential to:

  1. Disrupt Traditional Construction Methods: 3D printing can replace traditional construction methods, such as manual labor and traditional materials, with a more efficient and cost-effective alternative.
  2. Create New Business Models: 3D printing can enable new business models, such as print-on-demand and subscription-based services, which can change the way construction companies operate.
  3. Create New Job Opportunities: 3D printing can create new job opportunities in areas such as design, engineering, and printing.
  4. Change the Way We Design and Build: 3D printing can enable the creation of complex shapes and structures that would be difficult or impossible to build using traditional methods, changing the way we design and build structures.

However, there are also challenges and limitations to the adoption of 3D printing in construction, such as:

  1. Regulatory Frameworks: There is a need for regulatory frameworks to be developed to govern the use of 3D printing in construction. This includes developing standards for 3D printing materials and processes, as well as ensuring that 3D printed structures meet building codes and regulations.
  2. Standardization: There is a need for standardization of 3D printing materials and processes to ensure consistency and quality. This includes developing standards for 3D printing materials, such as concrete and plastic, as well as ensuring that 3D printing processes are consistent and reliable.
  3. Training and Education: There is a need for training and education programs to be developed to equip construction professionals with the skills and knowledge needed to work with 3D printing technology. This includes training programs for designers, engineers, and construction workers, as well as education programs for students and professionals.
  4. Infrastructure: There is a need for infrastructure to be developed to support the growth of 3D printing in construction, such as printing facilities and supply chains. This includes developing printing facilities that can handle large-scale 3D printing projects, as well as establishing supply chains for 3D printing materials and equipment.

Overall, the forecast for 3D printing in construction is highly optimistic, with many experts predicting significant growth and disruption in the industry. However, there are also challenges and limitations that need to be addressed to ensure the successful adoption of 3D printing in construction.

Wired: Bobbi Althoff on Exactly How She Got Rich—and How Rich, Exactly

From Mommy TikTok to that Drake interview and beyond, the podcast host is proof you can brute-force your way to online fame—and make a shit ton of money along the way.



https://www.wired.com/story/bobbi-althoff-big-interview/

BBC: The racist AI deepfake that fooled and divided a community


https://www.bbc.com/news/articles/ckg9k5dv1zdo

BBC: Martian sand can be used to build in space, Dublin scientists find


https://www.bbc.com/news/articles/cx25evle0zeo

Reuters: Tencent, Guillemot family mulling buyout of 'Assassin's Creed' maker Ubisoft, Bloomberg News reports


https://www.reuters.com/markets/deals/tencent-guillemot-family-mulling-buyout-assassins-creed-maker-ubisoft-bloomberg-2024-10-04/

AI is a powerful force. The democratization of this is crucial. This means taking LLMs and building other models that are outside the control of Big Tech. The components of compute, algorithms and data are essential.

This needs to be tied to blockchain, or at least a Web 3.0 application. When this occurs, we are dealing with tokenization, which means distributed ownership. If designed right, this truly operates outside the realm of the traditional financial system.

#technology #ai #postlaboreconomics

Not understanding the concept of a token (not crypto) in the AI and digital world is akin to not understanding the dollar in the financial system.

YouTube apologizes for falsely banning channels for spam, canceling subscriptions

A misfire of YouTube's systems led to the accidental banning of YouTube channels, affecting numerous creators who were informed their channels had been removed.

#youtube #newsonleo #banning #socialmedia

The situation involved a misfire by YouTube's systems, which resulted in the accidental banning of numerous channels for "Spam & Deceptive Practices." Here's a more detailed analysis of the situation:

What happened:

On October 3, 2024, YouTube announced that some channels had been incorrectly flagged for spam and removed. The affected channels were reportedly not engaging in any spam or deceptive practices, but were simply not meeting YouTube's algorithms for relevance and engagement.

How did it happen:

The exact cause of the misfire is unknown, but it is believed to be related to a problem with YouTube's automation systems. YouTube's algorithms are designed to analyze user behavior and determine which content is most relevant and engaging. However, these algorithms can be complex and prone to errors, particularly when dealing with large amounts of user data.

In this case, it appears that YouTube's algorithms misidentified certain channels as spam or deceptive, resulting in their removal. The misfire was not limited to a single channel or type of content, but rather affected hundreds of channels across various categories.

Impact on creators:

The misfire had a significant impact on the creators who had their channels removed. Many of these creators were left stunned and frustrated, wondering how their channels had been incorrectly targeted. Some reported that their files were missing, or that they tried to change their settings or reach out to support but received no assistance.


What's next:

As the situation continues to unfold, it remains to be seen whether YouTube will provide any further explanation for the misfire or take steps to prevent similar issues in the future. Creators are left to wonder if their livelihoods and content have been unfairly targeted by the company's systems.

In the meantime, YouTube is working to reinstate the affected channels and restore access to subscriptions and streaming capabilities. However, the lack of transparency and communication from the company has left many creators feeling frustrated and uncertain about the future.

Potential solutions:

To prevent similar issues in the future, YouTube may consider implementing additional measures to validate and test its automation systems. This could include:

  • Increased transparency and communication with creators and fans
  • Improved testing and validation of algorithms and systems
  • Enhanced support and assistance for creators who are affected by errors or misfires
  • Additional measures to prevent similar issues from occurring in the future

Ultimately, the situation highlights the importance of ensuring that YouTube's systems are reliable, trustworthy, and fair. By taking steps to address these concerns and provide greater transparency and communication, the company can help to build trust with its creators and fans.

The misfire also affected YouTube Premium subscribers who lost access to their subscriptions and streaming capabilities due to the channel removals. This was particularly problematic for creators who rely on their YouTube Premium subscriptions to monetize their content.

YouTube's response:

YouTube initially responded to the misfire by posting on its X account, acknowledging the problem and stating that its teams were working to reinstate the affected channels. The company assured creators that it was still working on the reinstatement and that some content, such as playlists, may still be delayed.

However, as the situation continued to unfold, it became clear that the problem was not fully resolved. YouTube's updates on its Help site revealed hundreds of responses from creators and fans, but no further explanation was provided.

Lack of transparency:

One of the most concerning aspects of the situation is the lack of transparency and communication from YouTube. Despite the widespread impact of the issue, the company has not provided any clear explanation for what caused the misfire or acknowledged the scale of the problem.

This lack of transparency has led to frustration and mistrust among creators, who are wondering if YouTube's systems are reliable and trustworthy. The situation highlights the importance of transparency and communication in complex systems, particularly when dealing with sensitive and high-stakes decisions such as channel removals.

Causes and consequences:

The misfire by YouTube highlights the risks associated with relying on automation systems, particularly when it comes to sensitive and high-stakes decisions such as channel removals. The situation also underscores the importance of testing and validating these systems to ensure that they are accurate and fair.

The consequences of the misfire are far-reaching, with potential impacts on creators, YouTube Premium subscribers, and the broader YouTube community. The situation has also raised questions about the company's ability to manage its systems and ensure fairness and accuracy.

Based upon my usage of Venice.ai, I think the Leo community could easily create 1 million tokens per day when LeoAI (Chatbot) goes live.

From what I am seeing, if the utility is similar, a few hundred people could easily surpass that to start.

Can you elaborate on that, I don’t really get what you’re saying here…

A token (not crypto but the true definition) is a unit of data. When you do a prompt in a chatbot, you get a certain output (information). There is a certain amount of tokens created.

The output is resident on the companies servers (OpenAI, Google, or Leo). It is then fed back into the model for further training.

Think of the exponential impact of all that data if 100 people are prompting multiple times.

1M tokens isn't outrageous.

You can get a feel for this by putting your threads, posts, or articles into this:

https://platform.openai.com/tokenizer

By the way, my response is 122 tokens.
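For anyone who wants to reproduce that count locally rather than through the web page, OpenAI's open-source tiktoken library exposes the same tokenizers. The choice of the cl100k_base encoding below is an assumption; different models use different encodings.

```python
# Count tokens locally with OpenAI's open-source tokenizer library.
# "cl100k_base" is the encoding used by the GPT-4-era models; swap it for
# whatever encoding the model you care about actually uses.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Tokens denote data, and it is all about data."
tokens = enc.encode(text)

print(len(tokens))          # number of tokens in the text
print(enc.decode(tokens))   # round-trips back to the original string
```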

Ok, I am getting you there, I didn’t realise the amount of tokens made such a difference!

Yeah. Tokens denote data, and it is all about data. The more that is available to the models to train on, the better. Llama 3 was trained on roughly 15 trillion tokens, so we can see a lot is needed.

I would say the next model training for Meta tops 50 trillion.

We need to feed LeoAI with more along with getting as much as we can on the blockchain. The latter makes it available to everyone.

Ok, but I guess it’s not only about volume, but also quality?

Well, quality is not what you think. We think of quality in terms of posts. That is not how computers think. For example, better quality for a knowledge graph is a table of data, because that is already structured in a way the computer understands.

GM is working on an eyes-off, hands-off driving system

Seven years ago, GM became the first automaker to offer hands-free driving when it rolled out its Super Cruise advanced driver assistance system.

What is Level 3 Autonomous Driving?

Level 3 autonomous driving, sometimes described as "hands-off, eyes-off" driving, refers to a system that can take full control of the vehicle, but only under specific conditions. Within those conditions the driver does not need to watch the road, but must still be ready to retake control when the system requests it.

Level 3 autonomous driving systems use a combination of sensors, cameras, and software to monitor the road and surrounding environment. They can recognize and respond to various driving scenarios, such as traffic lights, intersections, and lane markings. However, they are still limited by their programming and may not be able to handle complex or unexpected situations.

GM's Super Cruise System

GM's Super Cruise system is a Level 2 autonomous driving system, which means it can assist with driving tasks, but the driver must remain attentive and be ready to take control of the vehicle at any moment. Super Cruise uses a combination of lidar map data, high-precision GPS, cameras, and radar sensors to monitor the road and surrounding environment.

When Super Cruise is activated, it can accelerate or brake to maintain a selected following distance from a vehicle ahead, steer to keep its lane position, and make automatic lane changes to pass slower traffic. However, the driver must remain attentive and be prepared to take control of the vehicle at any moment.

Level 3 System: Hands-Off and Eyes-Off Driving

GM's goal is to develop a Level 3 system that would allow drivers to keep their eyes off the road and hands off the wheel. This would be achieved through the use of advanced sensor technology, such as high-resolution cameras and lidar sensors, as well as sophisticated software that can recognize and respond to various driving scenarios.

The Level 3 system would be designed to operate on highways and at reduced speeds, where the environment is less complex and the risk of accidents is lower. However, it's still unclear when this technology would become publicly available and how it would be implemented in production vehicles.

Advancements and Future Plans

GM has made significant progress in expanding Super Cruise's availability across its brands, including Chevrolet and GMC. By the end of 2025, Super Cruise will be available on approximately 750,000 miles of roads in the United States and Canada.

In addition to the expansion of Super Cruise, GM has also introduced new features and technologies to enhance the driving experience. For example, the company has developed a driver attention assist system, which uses cameras and sensors to monitor the driver's attention and alertness. This system can be used in conjunction with Super Cruise to provide additional safety features and prevent distracted driving.

New Hires and Strategic Directions

The recent hiring of Anantha Kancherla, a former executive at Lyft and Meta, has raised hopes that GM is ramping up its efforts to develop a Level 3 system. Kancherla has been tasked with expanding or innovating on the foundation of Super Cruise, leveraging its software capabilities to drive progress.

Kancherla's background in autonomous driving and his experience at Lyft and Meta make him an ideal candidate to lead GM's efforts in this area. His hiring has sent a clear message that GM is committed to advancing its autonomous driving technology and is willing to invest in the necessary talent and resources to achieve its goals.

Conclusion

GM's plans for a Level 3 autonomous driving system are significant and ambitious. The company's goal is to develop a system that would allow drivers to keep their eyes off the road and hands off the wheel, while still ensuring safety and security.

While it's unclear when this technology will become publicly available, GM's recent advancements and hires suggest that the company is making significant progress in this area. As the autonomous driving industry continues to evolve, GM's Level 3 system has the potential to revolutionize the way we think about driving and safety on the road.

My Den (subscription) is going to focus upon #technology, how money works, and the impact of #ai and the digital transformation on the economy.

Everything is changing and quickly.

3D-printed Walmart extension set to revolutionize US retail

Let's look at the world of 3D-printed architecture and explore the implications of Walmart's groundbreaking project.

The 3D Printing Process

The 3D printing process used in the Walmart extension involved extruding a cement-like mixture out of a nozzle in layers to build up the basic structure of the building. This layer-by-layer extrusion approach is known as 3D concrete printing (3DCP); it is similar in principle to fused deposition modeling (FDM), rather than resin-based methods such as stereolithography (SLA).

In the case of the Walmart extension, Alquist 3D used a customized 3D printing system that allowed them to print large structures like this one. The printing process involved the following steps:

  1. Design: The design of the building was created using computer-aided design (CAD) software.
  2. Material preparation: The cement-like mixture was prepared according to the recipe provided by the manufacturer.
  3. Printing: The mixture was extruded out of a nozzle in layers, with each layer being deposited on top of the previous one.
  4. Curing: After each layer was printed, the layer was cured using a combination of heat and light.
  5. Finishing: Once the printing process was complete, human builders were brought in to finish the structure by installing electrical and plumbing systems, and adding roofing materials.

The Benefits of 3D-Printed Architecture

The use of 3D printing technology in construction has several benefits, including:

  1. Increased speed: 3D printing can build structures much faster than traditional construction methods. In the case of the Walmart extension, the printing process took just 45 days.
  2. Reduced labor costs: 3D printing can reduce labor costs by automating many of the construction processes.
  3. Improved accuracy: 3D printing can improve the accuracy of construction by allowing for precise control over the printing process.
  4. Sustainability: 3D printing can be more sustainable than traditional construction methods, as it reduces waste and uses fewer materials.
  5. Customization: 3D printing allows for the creation of complex geometries and customized designs that would be difficult or impossible to achieve with traditional construction methods.

The Future of 3D-Printed Architecture

The use of 3D printing technology in construction is still in its early stages, but it has the potential to revolutionize the way buildings are designed and built. Some of the potential applications of 3D-printed architecture include:

  1. Housing: 3D printing could be used to create affordable and sustainable housing solutions.
  2. Commercial construction: 3D printing could be used to build large commercial structures, such as offices and shopping centers.
  3. Infrastructure: 3D printing could be used to build roads, bridges, and other infrastructure projects.
  4. Residential construction: 3D printing could be used to build custom residential structures, such as single-family homes and apartments.

Challenges and Limitations

While 3D printing technology has the potential to revolutionize the construction industry, there are still several challenges and limitations that need to be addressed, including:

  1. Scalability: 3D printing is still a relatively slow process, and it can be difficult to scale up production to large sizes.
  2. Material limitations: The materials used in 3D printing are still relatively limited, and they can be prone to degradation over time.
  3. Regulatory frameworks: There are still many regulatory frameworks that need to be developed to govern the use of 3D printing technology in construction.
  4. Cost: 3D printing can be more expensive than traditional construction methods, at least in the short term.

The Role of Walmart and Alquist 3D

Walmart and Alquist 3D are playing a crucial role in the development of 3D-printed architecture. Walmart is providing the market demand and financial resources to drive the adoption of 3D printing technology, while Alquist 3D is providing the technical expertise and infrastructure to make it happen.

The partnership between Walmart and Alquist 3D is significant, as it demonstrates the potential for 3D printing technology to be used in commercial construction projects. The success of this project could pave the way for the widespread adoption of 3D printing technology in the construction industry.

The Implications for the Construction Industry

The use of 3D printing technology in construction has the potential to disrupt the entire industry. Some of the implications for the construction industry include:

  1. Increased competition: 3D printing technology could increase competition for traditional construction companies, as new players enter the market.
  2. New business models: 3D printing technology could enable new business models, such as pay-per-use or subscription-based services.
  3. Changes in workforce: 3D printing technology could lead to changes in the workforce, as traditional construction skills become less relevant.
  4. Changes in regulatory frameworks: 3D printing technology could lead to changes in regulatory frameworks, as new standards and guidelines are developed to govern its use.

Overall, the use of 3D printing technology in construction is a significant development that has the potential to transform the industry. While there are still many challenges and limitations to be addressed, the potential benefits of 3D printing technology make it an exciting and promising area of research and development.

REVOLUTIONIZING COMMERCIAL CONSTRUCTION

In partnership with Walmart, Alquist completed the largest 3D concrete-printed (3DCP) commercial building in the United States.

The 8,000 square-foot addition to Walmart Supercenter in Athens, Tenn. boasts 19’4” high walls – among the tallest, seismic-ready, 3DCP structures in the world.

The use of 3DCP revolutionizes the construction of commercial spaces to offer more efficient and sustainable solutions when compared to traditional construction methods.

What is Alquist3D?

Alquist 3D is a groundbreaking technology developed by Alquist, a pioneering company in the field of 3D printing and additive manufacturing. Alquist 3D is a proprietary 3D printing process that combines innovative materials and techniques to create intricate, high-performance structures that were previously unimaginable.

Some of the key features and benefits of Alquist 3D include:

  1. Unparalleled Complexity: Alquist 3D can print structures with intricate geometries, internal cavities, and complex shapes that would be extremely challenging or impossible to achieve with traditional manufacturing methods. This is made possible by the technology's ability to print in multiple directions, allowing for the creation of complex structures with precision and accuracy.
  2. Exceptional Strength-to-Weight Ratio: The 3D printing process enables the creation of structures with exceptional strength-to-weight ratios, making them ideal for applications where weight reduction is critical. This is achieved by optimizing the material composition and structure of the printed parts, allowing for maximum strength while minimizing weight.
  3. Lightweight yet Robust: Alquist 3D structures can be designed to be extremely lightweight, reducing material usage and weight while maintaining or even improving structural integrity. This is particularly beneficial in industries where weight reduction is crucial, such as aerospace and automotive.
  4. Customization and Optimization: The 3D printing process enables the creation of custom structures tailored to specific applications, allowing for optimized performance and reduced waste. This is achieved by using computer-aided design (CAD) software to design and simulate the printed parts, ensuring that they meet the required specifications and performance criteria.
  5. Rapid Prototyping and Production: Alquist 3D printing is a rapid process, allowing for the creation of complex structures in a matter of hours or days, compared to traditional manufacturing methods that can take weeks or months. This enables rapid prototyping, testing, and production, reducing the time-to-market and increasing the competitiveness of companies that adopt this technology.
  6. Material Versatility: Alquist 3D can print with a wide range of materials, including metals, ceramics, and polymers, offering flexibility and adaptability for various applications. This allows companies to choose the most suitable material for their specific needs, ensuring optimal performance and durability.

Alquist 3D has numerous potential applications across various industries, including:

  1. Aerospace: Lightweight, high-performance structures for aircraft and spacecraft, such as engine components, satellite parts, and spacecraft bodies.
  2. Automotive: Customized, lightweight components for vehicles, such as engine mounts, suspension components, body panels, and chassis components.
  3. Medical: Customized implants, surgical guides, and medical devices that require complex geometries and high precision, such as prosthetic limbs and surgical instruments.
  4. Energy: Lightweight, high-performance components for wind turbines, solar arrays, and other renewable energy systems, such as turbine blades and energy storage systems.
  5. Construction: Customized, complex structures for buildings, bridges, and other infrastructure, such as building facades, bridge components, and tunnel linings.

Alquist 3D has the potential to revolutionize various industries by enabling the creation of complex, high-performance structures that were previously impossible or impractical to manufacture. By leveraging this technology, companies can reduce production costs, improve product performance, and gain a competitive edge in their respective markets.

From Alquist Website:

About Alquist

In December of 2021, Alquist became synonymous with 3D concrete printing (3DCP) by completing the first owner-occupied 3D printed home in the world. Alquist uses 3D printing technology to create exceptional designs while lowering the cost of housing and infrastructure. After 6+ years of research, and working with multiple machines and materials, Alquist continues to be a leader in the industry.

Alquist’s approach to 3DCP has been unique in our emerging industry. As 3DCP continued to emerge, Alquist took the route of becoming a construction company – as opposed to a manufacturer. Our goal is to become experts on a wide variety of printers, materials, and methods to expand our understanding and gain insightful knowledge of the best practices for the future of this industry. Alquist is the only 3DCP company that knows how to work directly with local city officials, navigate code and permitting, and successfully print a home using multiple printers and/or materials.

In September 2023 Alquist moved our offices to Greeley, CO after working with the City and the State of Colorado. Together with our municipal partners, and Aims Community College, to create our unique Economic Ecosystem, bringing jobs and housing back to our communities.

Zachary Mannheimer, Founder + Board Chair
Patrick Callahan, CEO
Mike Halas, Chief Strategy Officer
Ryan Cox, Chief Operating Officer
Chris Vaughn, Senior Operations Director
Aiman Hussein, Director of Technologies
Joe Crimmings, Creative Director
Greg Galifianakis, HR Director
Gia Galifianakis, General Manager
Krissy Adamow, Senior Programs Manager

Contact Us

Alquist 3D Headquarters
617 6th Street
Greeley, CO 80631

For Questions

sales@alquist3d.com

Amazon closes more of its cashierless convenience stores

Amazon continues to scale back efforts around its cashierless checkout technology, Just Walk Out.

Amazon's Cashierless Checkout Technology, Just Walk Out, Slows Down: What's Behind the Move?

Amazon, the e-commerce giant, has been gradually scaling back its efforts around its cashierless checkout technology, Just Walk Out, in recent months. The company has closed three of its Go convenience stores in New York, leaving only 17 stores across the nation. This significant reduction in the number of stores operating under the Just Walk Out technology marks a shift in Amazon's retail strategy.

The Just Walk Out technology, introduced in 2018, uses AI systems to track what customers pick up, allowing them to buy items without ever having to formally check out with a cashier. Initially, the technology was seen as a game-changer in the retail industry, but since 2023, Amazon has cut the number of Go storefronts it operates nearly in half.

In addition to closing stores, Amazon has also pulled the Just Walk Out technology from its grocery stores earlier this year. The company cited a focus on smaller grab-and-go shops as the reason for this decision. However, it's worth noting that Amazon still licenses the Just Walk Out technology to third-party convenience stores, and the company remains committed to the Go format and technology as a whole.

The decision to scale back on Just Walk Out technology comes as Amazon focuses on other areas of its business. The company has been investing heavily in its grocery delivery service, Amazon Fresh, and has also been expanding its physical retail presence through the acquisition of Whole Foods Market. This strategic move allows Amazon to allocate resources to other initiatives that align with its long-term goals.

Despite the scaling back of Just Walk Out technology, Amazon's commitment to cashierless checkout remains strong. The company has been testing the technology in various formats, including its Amazon Go convenience stores and its Amazon Fresh grocery stores. The technology has been well-received by customers, who appreciate the convenience and speed of checkout.

Amazon's decision to scale back on Just Walk Out technology is a strategic move to focus on other areas of its business. While the technology has been successful, the company is choosing to prioritize other initiatives. However, Amazon's commitment to cashierless checkout remains strong, and the company will likely continue to innovate and improve this technology in the future.

Key Takeaways:

  • Amazon has closed three of its Go convenience stores in New York, leaving only 17 stores across the nation.
  • The company has cut the number of Go storefronts it operates nearly in half since 2023.
  • Amazon has pulled the Just Walk Out technology from its grocery stores, citing a focus on smaller grab-and-go shops.
  • The company still licenses the Just Walk Out technology to third-party convenience stores and remains committed to the Go format and technology as a whole.
  • Amazon's decision to scale back on Just Walk Out technology is a strategic move to focus on other areas of its business, such as grocery delivery and physical retail expansion.

For a simple battery-powered LED circuit, Kirchhoff's Laws tell you the following:

KCL: I = I1 + I2

KVL: Vbattery - VR - VLED = 0, or Vbattery = VR + VLED
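
As a quick worked example of the KVL relation above (the component values are assumptions chosen for illustration only):

```python
# Sizing a current-limiting resistor from KVL: Vbattery = VR + VLED.
v_battery = 9.0   # supply voltage in volts (assumed)
v_led = 2.0       # LED forward voltage drop in volts (assumed)
i_led = 0.020     # desired LED current, 20 mA (assumed)

v_r = v_battery - v_led      # voltage the resistor must drop
r = v_r / i_led              # Ohm's law: R = V / I

print(f"Resistor drop: {v_r} V, required resistance: {r:.0f} ohms")  # 350 ohms
```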

A capacitor charges and discharges almost completely after five times its RC time constant, or 5RC. After the equivalent of one time constant has passed, a discharged capacitor will charge to roughly two-thirds (about 63%) of its full value, and a charged capacitor will discharge by roughly the same fraction.
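
Those rules of thumb come straight from the exponential charging curve V(t) = Vs(1 - e^(-t/RC)); a short sketch of the numbers:

```python
import math

def charge_fraction(t_over_rc: float) -> float:
    """Fraction of the supply voltage reached after t/RC time constants."""
    return 1 - math.exp(-t_over_rc)

for n in range(1, 6):
    print(f"after {n} time constant(s): {charge_fraction(n):.1%} charged")
# 1 RC -> 63.2% (roughly two-thirds); 5 RC -> 99.3% (almost completely).
```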

Technology terms:

  • Application programming interface (API): API is software that allows computers or applications to communicate with each other.

  • Application: An application is software that completes a task, whether for a user or another application, based on a given set of rules.

  • Attribute: An attribute is a specification that provides extra information about an element.

  • Back end: The back end is the server side of an application that provides functionality and is inaccessible to users.

  • Bug: A bug is a coding error that interferes with a website's or an application’s normal operation.

  • DevOps: DevOps is a series of practices and tools that blend software development and IT operations to improve time-to-market for software.

  • Framework: A framework is a program, code library or another component for developing software applications.

  • Front end: The front end is the user side of an application that displays information and allows users to interact with the program.

  • Library: A library is a collection of pre-written code that programmers often use to save time during development.

  • Object-oriented programming (OOP): OOP is a programming model that categorizes software design by data (objects) instead of procedures.

  • Software: Software is a computer's operating instructions that tell it how to function.

  • Text editor: A text editor is a program for writing plain text for coding.

  • Version control: Version control is a program that tracks code or file changes to a website or app, allowing users to restore earlier versions.

  • Web server: A web server is a computer that stores and delivers web pages.
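
Several of these terms fit together in one tiny sketch: a front end (the client code below) calls an API exposed by a back end running on a web server. The endpoint URL is a hypothetical placeholder, not a real service.

```python
# Hypothetical example only; api.example.com is a placeholder, not a real API.
import requests

response = requests.get("https://api.example.com/v1/users/42")
response.raise_for_status()     # surface any back-end error instead of hiding it

user = response.json()          # the back end returns structured JSON data
print(user.get("name"))         # the front end decides how to display it
```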

3D Printing Concrete Directly From Mixer Truck

Is on-site batch plant mixing obsolete? I recently spent 3 weeks in Calgary, Canada, commuting daily into the Indigenous territory of the Siksika Nation.

#3dprinting #technology #construction

Is On-Site Batch Plant Mixing Obsolete? A Revolutionary Shift in Construction Automation

The construction industry is on the cusp of a revolution, driven by innovative technologies and companies willing to challenge the status quo. One such company, Nidus 3D, has been pushing the boundaries of construction automation in the region of Calgary, Canada. In this article, we'll delve into the evolution of on-site batch plant mixing and the potential obsolescence of this traditional method.

The Evolution of On-Site Batch Plant Mixing

On-site batch plant mixing has been the standard practice in the construction industry for decades. However, this method has its limitations. In extreme climates, such as the one found in Calgary, Canada, the batch plant mixer can struggle to operate efficiently, resulting in issues with the generator and pump. These problems can force the team to halt printing operations for the day, leading to costly delays and lost productivity.

The Rise of Off-Site Batch Mixing

Nidus 3D has been at the forefront of this revolution, collaborating with Lafarge to develop a new solution. Instead of hydrating the dry mix on site, they batch mix the custom concrete at the Lafarge concrete plant an hour away and deliver it to the site in a regular mixing truck. The material is then deposited into a pump with sufficient precision to feed the printer.

This approach has the potential to dramatically increase the availability of printable material and reduce labor on site. By outsourcing material mixing and hydration, larger concrete companies can handle one of the dirtiest parts of the job, reducing airborne silica dust and creating safer working conditions.

The Primary Challenge: Truck Time on Site

The primary challenge with this strategy is the length of time the truck is required on site. Typically, these trucks dump their load as quickly as possible to move on to the next job site and maximize their earnings. This can result in significant hourly charges, making it costly to keep the truck on site for an entire print day.

Preserving the Benefits: Material Silos

One potential solution to navigate the hourly concrete truck charges is the use of material silos that can hold the wet mix in a printable state through agitation or chemical means. This would allow the truck to return to the plant to refill, rather than waiting on site. Material silos would enable the truck to return to the plant, reducing the need for extended on-site waiting times and minimizing the financial burden.

Conclusion

Nidus 3D's innovative approach to off-site batch mixing is a testament to the company's commitment to driving construction automation forward. While on-site batch plant mixing has its challenges, the benefits of off-site mixing, including reduced labor and safer working conditions, make it an attractive alternative.

As the construction industry continues to evolve, it's likely that we'll see more companies adopting similar solutions. The future of technology will not come from copying the status quo, but from those willing to try new things that have never been achieved before.

The World's Largest 3D-Printed Building: A Luxury Horse Barn in Florida

The construction industry is undergoing a significant transformation with the rapid advancement of 3D-printed architecture. In the United States, projects such as the 100-house development in Austin, Texas, and NASA's Mars habitat are pioneering the use of this technology. The latest milestone in this journey is the world's largest 3D-printed building, a luxury horse barn in Florida, developed by Printed Farms.

The Project: A Luxury Horse Barn

The horse barn, measuring approximately 10,100 square feet (almost 940 square meters), is a single-story structure with a total length of 155 feet (47 meters), a width of 83 feet (25 meters), and a height of 13 feet (4 meters). This impressive structure was built using a single COBOD BOD2 3D printer, the same model used to create Europe's first two-story 3D-printed home. The printer extruded a cement-like mixture out of a nozzle, creating the walls in layers.

The Build Process: A Complex and Efficient Process

The build process involved printing the structure in multiple sections, starting with the sides of the building and finishing with the middle. The COBOD BOD2 3D printer was not large enough to build the entire structure in one go, so it was necessary to print in sections. The entire process took 18 days to complete 3,200 square feet (297 square meters) of the building. This complex and efficient process demonstrates the capabilities of 3D printing technology in large-scale construction projects.

Finishing Touches: Human Builders Take Over

The 3D printing process is now complete, and human builders are working on finishing the building. They will install the roof, windows, doors, electrical fittings, and any other necessary features. The building was designed with local weather conditions in mind, including hurricanes and tropical storms, and is expected to withstand extreme weather conditions.

Innovative Features: A Sustainable and Resilient Structure

The 3D-printed walls of the horse barn create a cavity and air gap, providing natural cooling to the building. This innovative feature demonstrates the versatility and benefits of 3D-printing technology. The structure's design also incorporates local weather conditions, making it an exemplary example of how 3D-printed architecture can be used to create sustainable and resilient buildings.

Conclusion: A Significant Milestone in 3D-Printed Architecture

Printed Farms' luxury horse barn is a significant milestone in the development of 3D-printed architecture. The project showcases the potential of this technology to create large-scale structures that are not only innovative but also sustainable and resilient. As the construction industry continues to evolve, it is likely that we will see more projects like this, pushing the boundaries of what is possible with 3D printing. The world's largest 3D-printed building is a testament to the rapid progress being made in this field and the exciting possibilities that lie ahead.

#aiimage created using venice.ai.

What’s in the rug? How TikTok got swept into a real-time true crime story

A woman in Ohio is being haunted by ghosts. Or maybe she’s not. There’s a dead body buried underneath her house, rolled up inside a rug.

#technology #newsonleo #tiktok

Let's break down the article and the case of Katie Santry's mysterious rug in more detail.

The Discovery of the Rug

Katie Santry, a mother and content creator on TikTok, was building a fence in her yard when she stumbled upon a mysterious rug buried beneath the surface. The discovery quickly went viral on TikTok, with millions of people watching as Santry uncovered the rug and shared her story with her followers.

Theories and Speculation

As the investigation progressed, Santry's followers began to speculate about the possible origins of the rug. Some theories suggested that the rug might have been used to conceal evidence of a crime, such as a murder. Others speculated that the rug might have been buried on the property for unknown reasons, perhaps as a prank or a mistake.

The Investigation

Homicide detectives were called to the scene, along with two cadaver dogs, to search for any signs of human remains. The dogs' behavior suggested they had picked up on something, but ultimately no body was found. The rug itself remained a mystery, with no visible signs of tampering or foul play.

The Police's Next Move

After the initial investigation, the police took the rug in for testing, which is standard procedure in cases where a suspicious item is found. The police have not released any further information about the testing or what they hope to learn from it.

The True Crime Obsession

The case of Katie Santry's mysterious rug has highlighted the true crime obsession that has taken hold of the Internet. True crime fans have been fascinated by the case, with many viewers taking to social media to share their theories and speculate about the possible origins of the rug.

The Ethics of Amateur Sleuthing

The case has also raised important questions about the ethics of amateur sleuthing. While it can be exciting to try to solve real-life mysteries, it's essential to remember that some things are best left to trained professionals. The internet sleuthing community has been criticized for its lack of expertise and its tendency to sensationalize cases, which can lead to misinformation and harm to those involved.

The Psychological Appeal of True Crime Stories

True crime stories have a unique psychological appeal that draws people in. They tap into our deep-seated fascination with the darker aspects of human nature, and they offer a sense of control and understanding in a chaotic world.

However, this fascination also raises important questions about the human psyche. Why do people find true crime stories so compelling? What drives our desire to learn more about the darker aspects of human nature?

The Case of Katie Santry's Mysterious Rug

The case of Katie Santry's mysterious rug is a microcosm of the true crime obsession that has taken hold of the internet. It highlights the dangers of amateur sleuthing and the importance of leaving some things to trained professionals.

At the same time, it showcases the power of social media to bring people together and to create a sense of community around shared interests. The case has sparked a global conversation about true crime, with millions of people following along on social media and sharing their thoughts and theories.

Ultimately, the case of Katie Santry's mysterious rug is a complex and multifaceted one that raises more questions than it answers. However, it's an important reminder of the importance of responsible and informed online behavior, and the need to separate fact from fiction in our online discussions.

The Case Against Reality

A professor of cognitive science argues that the world is nothing like the one we experience through our senses.

#technology #science #reality

As we go about our daily lives, we tend to assume that our perceptions—sights, sounds, textures, tastes—are an accurate portrayal of the real world. Sure, when we stop and think about it—or when we find ourselves fooled by a perceptual illusion—we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.

Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.

Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”

On the other side are quantum physicists, marveling at the strange fact that quantum systems don’t seem to be definite objects localized in space until we come along to observe them. Experiment after experiment has shown—defying common sense—that if we assume that the particles that make up ordinary objects have an objective, observer-independent existence, we get the wrong answers. The central lesson of quantum physics is clear: There are no public objects sitting out there in some preexisting space. As the physicist John Wheeler put it, “Useful as it is under ordinary circumstances to say that the world exists ‘out there’ independent of us, that view can no longer be upheld.”

So while neuroscientists struggle to understand how there can be such a thing as a first-person reality, quantum physicists have to grapple with the mystery of how there can be anything but a first-person reality. In short, all roads lead back to the observer. And that’s where you can find Hoffman—straddling the boundaries, attempting a mathematical model of the observer, trying to get at the reality behind the illusion. Quanta Magazine caught up with him to find out more.

Gefter: People often use Darwinian evolution as an argument that our perceptions accurately reflect reality. They say, “Obviously we must be latching onto reality in some way because otherwise we would have been wiped out a long time ago. If I think I’m seeing a palm tree but it’s really a tiger, I’m in trouble.”

Hoffman: Right. The classic argument is that those of our ancestors who saw more accurately had a competitive advantage over those who saw less accurately and thus were more likely to pass on their genes that coded for those more accurate perceptions, so after thousands of generations we can be quite confident that we’re the offspring of those who saw accurately, and so we see accurately. That sounds very plausible.

But I think it is utterly false. It misunderstands the fundamental fact about evolution, which is that it’s about fitness functions—mathematical functions that describe how well a given strategy achieves the goals of survival and reproduction. The mathematical physicist Chetan Prakash proved a theorem that I devised that says: According to evolution by natural selection, an organism that sees reality as it is will never be more fit than an organism of equal complexity that sees none of reality but is just tuned to fitness. Never.

Gefter: You’ve done computer simulations to show this. Can you give an example?

Hoffman: Suppose in reality there’s a resource, like water, and you can quantify how much of it there is in an objective order—very little water, medium amount of water, a lot of water. Now suppose your fitness function is linear, so a little water gives you a little fitness, medium water gives you medium fitness, and lots of water gives you lots of fitness—in that case, the organism that sees the truth about the water in the world can win, but only because the fitness function happens to align with the true structure in reality.

Generically, in the real world, that will never be the case. Something much more natural is a bell curve—say, too little water you die of thirst, but too much water you drown, and only somewhere in between is good for survival. Now the fitness function doesn’t match the structure in the real world. And that’s enough to send truth to extinction. For example, an organism tuned to fitness might see small and large quantities of some resource as, say, red, to indicate low fitness, whereas they might see intermediate quantities as green, to indicate high fitness. Its perceptions will be tuned to fitness, but not to truth. It won’t see any distinction between small and large—it only sees red—even though such a distinction exists in reality.
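
Hoffman's point about fitness functions can be illustrated with a toy simulation. This is only a sketch of the general idea, not the evolutionary-game model he and Prakash actually analyzed: fitness peaks at a moderate amount of water, a "truth" agent always prefers more water, and a "fitness-tuned" agent only perceives which option is closer to the peak.

```python
import math
import random

def fitness(w: float) -> float:
    # Bell-curve fitness: too little or too much water is bad, moderate is best.
    return math.exp(-((w - 50.0) ** 2) / (2 * 15.0 ** 2))

def truth_agent(a: float, b: float) -> float:
    return max(a, b)                              # sees true quantities, wants "more"

def fitness_agent(a: float, b: float) -> float:
    return a if fitness(a) >= fitness(b) else b   # sees only "red/green" fitness cues

random.seed(0)
totals = {"truth": 0.0, "fitness-tuned": 0.0}
for _ in range(10_000):
    a, b = random.uniform(0, 100), random.uniform(0, 100)
    totals["truth"] += fitness(truth_agent(a, b))
    totals["fitness-tuned"] += fitness(fitness_agent(a, b))

print(totals)   # the fitness-tuned agent accumulates more fitness overall
```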

Gefter: But how can seeing a false reality be beneficial to an organism’s survival?

Hoffman: There’s a metaphor that’s only been available to us in the past 30 or 40 years, and that’s the desktop interface. Suppose there’s a blue rectangular icon on the lower right corner of your computer’s desktop — does that mean that the file itself is blue and rectangular and lives in the lower right corner of your computer? Of course not. But those are the only things that can be asserted about anything on the desktop — it has color, position, and shape. Those are the only categories available to you, and yet none of them are true about the file itself or anything in the computer.

They couldn’t possibly be true. That’s an interesting thing. You could not form a true description of the innards of the computer if your entire view of reality was confined to the desktop. And yet the desktop is useful. That blue rectangular icon guides my behavior, and it hides a complex reality that I don’t need to know. That’s the key idea. Evolution has shaped us with perceptions that allow us to survive. They guide adaptive behaviors. But part of that involves hiding from us the stuff we don’t need to know. And that’s pretty much all of reality, whatever reality might be. If you had to spend all that time figuring it out, the tiger would eat you.

Gefter: So everything we see is one big illusion?

Hoffman: We’ve been shaped to have perceptions that keep us alive, so we have to take them seriously. If I see something that I think of as a snake, I don’t pick it up. If I see a train, I don’t step in front of it. I’ve evolved these symbols to keep me alive, so I have to take them seriously. But it’s a logical flaw to think that if we have to take it seriously, we also have to take it literally.

Gefter: If snakes aren’t snakes and trains aren’t trains, what are they?

Hoffman: Snakes and trains, like the particles of physics, have no objective, observer-independent features. The snake I see is a description created by my sensory system to inform me of the fitness consequences of my actions. Evolution shapes acceptable solutions, not optimal ones. A snake is an acceptable solution to the problem of telling me how to act in a situation. My snakes and trains are my mental representations; your snakes and trains are your mental representations.

Gefter: How did you first become interested in these ideas?

Hoffman: As a teenager, I was very interested in the question “Are we machines?” My reading of the science suggested that we are. But my dad was a minister, and at church they were saying we’re not. So I decided I needed to figure it out for myself. It’s sort of an important personal question—if I’m a machine, I would like to find that out! And if I’m not, I’d like to know, what is that special magic beyond the machine? So eventually in the 1980s I went to the artificial-intelligence lab at MIT and worked on machine perception. The field of vision research was enjoying a newfound success in developing mathematical models for specific visual abilities.

I noticed that they seemed to share a common mathematical structure, so I thought it might be possible to write down a formal structure for observation that encompassed all of them, perhaps all possible modes of observation. I was inspired in part by Alan Turing. When he invented the Turing machine, he was trying to come up with a notion of computation, and instead of putting bells and whistles on it, he said, Let’s get the simplest, most pared down mathematical description that could possibly work. And that simple formalism is the foundation for the science of computation. So I wondered, could I provide a similarly simple formal foundation for the science of observation?

Gefter: A mathematical model of consciousness.

Hoffman: That’s right. My intuition was, there are conscious experiences. I have pains, tastes, smells, all my sensory experiences, moods, emotions and so forth. So I’m just going to say: One part of this consciousness structure is a set of all possible experiences. When I’m having an experience, based on that experience I may want to change what I’m doing. So I need to have a collection of possible actions I can take and a decision strategy that, given my experiences, allows me to change how I’m acting. That’s the basic idea of the whole thing.

I have a space X of experiences, a space G of actions, and an algorithm D that lets me choose a new action given my experiences. Then I posited a W for a world, which is also a probability space. Somehow the world affects my perceptions, so there’s a perception map P from the world to my experiences, and when I act, I change the world, so there’s a map A from the space of actions to the world. That’s the entire structure. Six elements. The claim is: This is the structure of consciousness. I put that out there so people have something to shoot at.
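
Read purely as structure, the six elements are three spaces and three maps between them. A minimal schematic sketch, with plain deterministic functions standing in for the probability spaces and Markovian kernels of the formal theory:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ConsciousAgent:
    perceive: Callable[[Any], Any]   # P: world W  -> experience X
    decide: Callable[[Any], Any]     # D: experience X -> action G
    act: Callable[[Any], Any]        # A: action G -> world W

    def step(self, world: Any) -> Any:
        """One perceive-decide-act cycle through P, then D, then A."""
        return self.act(self.decide(self.perceive(world)))
```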

Gefter: But if there’s a W, are you saying there is an external world?

Hoffman: Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.

Gefter: The world is just other conscious agents?

Hoffman: I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent. This mathematics is telling me something. I can take two minds, and they can generate a new, unified single mind. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses.

Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.

Gefter: If it’s conscious agents all the way down, all first-person points of view, what happens to science? Science has always been a third-person description of the world.

Hoffman: The idea that what we’re doing is measuring publicly accessible objects, the idea that objectivity results from the fact that you and I can measure the same object in the exact same situation and get the same results — it’s very clear from quantum mechanics that that idea has to go. Physics tells us that there are no public physical objects. So what’s going on? Here’s how I think about it. I can talk to you about my headache and believe that I am communicating effectively with you, because you’ve had your own headaches.

The same thing is true of apples and the moon and the sun and the universe. Just like you have your own headache, you have your own moon. But I assume it's relevantly similar to mine. That's an assumption that could be false, but that's the source of my communication, and that's the best we can do in terms of public physical objects and objective science.

Gefter: It doesn’t seem like many people in neuroscience or philosophy of mind are thinking about fundamental physics. Do you think that’s been a stumbling block for those trying to understand consciousness?

Hoffman: I think it has been. Not only are they ignoring the progress in fundamental physics, they are often explicit about it. They’ll say openly that quantum physics is not relevant to the aspects of brain function that are causally involved in consciousness. They are certain that it’s got to be classical properties of neural activity, which exist independent of any observers—spiking rates, connection strengths at synapses, perhaps dynamical properties as well. These are all very classical notions under Newtonian physics, where time is absolute and objects exist absolutely.

And then [neuroscientists] are mystified as to why they don’t make progress. They don’t avail themselves of the incredible insights and breakthroughs that physics has made. Those insights are out there for us to use, and yet my field says, “We’ll stick with Newton, thank you. We’ll stay 300 years behind in our physics.”

Gefter: I suspect they’re reacting to things like Roger Penrose and Stuart Hameroff’s model, where you still have a physical brain, it’s still sitting in space, but supposedly it’s performing some quantum feat. In contrast, you’re saying, “Look, quantum mechanics is telling us that we have to question the very notions of ‘physical things’ sitting in ‘space.’”

Hoffman: I think that’s absolutely true. The neuroscientists are saying, “We don’t need to invoke those kind of quantum processes, we don’t need quantum wave functions collapsing inside neurons, we can just use classical physics to describe processes in the brain.” I’m emphasizing the larger lesson of quantum mechanics: Neurons, brains, space … these are just symbols we use, they’re not real. It’s not that there’s a classical brain that does some quantum magic.

It’s that there’s no brain! Quantum mechanics says that classical objects—including brains—don’t exist. So this is a far more radical claim about the nature of reality and does not involve the brain pulling off some tricky quantum computation. So even Penrose hasn’t taken it far enough. But most of us, you know, we’re born realists. We’re born physicalists. This is a really, really hard one to let go of.

Gefter: To return to the question you started with as a teenager, are we machines?

Hoffman: The formal theory of conscious agents I've been developing is computationally universal—in that sense, it's a machine theory. And it's because the theory is computationally universal that I can get all of cognitive science and neural networks back out of it. Nevertheless, for now I don't think we are machines—in part because I distinguish between the mathematical representation and the thing being represented. As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I'm claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.

The space suit Chinese astronauts will wear on the Moon

China is planning to put people on the Moon around 2030 and is showing off the new space suits they'll be wearing. At a media event, the China Manned Space Agency (CMSA) had two astronauts put the suits through their paces for the cameras.

According to current plans, the CMSA intends to land on the Moon.

#newsonleo #china #space #astronaut

China Unveils Revolutionary New Lunar Space Suit for Moon Mission

In a major breakthrough, the China Manned Space Agency (CMSA) has unveiled a cutting-edge lunar space suit designed to protect astronauts on the Moon's surface. The new suit was showcased at the third Spacesuit Technology Forum at the China Astronaut Research and Training Center in Chongqing, China, marking a significant milestone in China's space program.

A New Era in Lunar Space Suits

The new suit is a groundbreaking development in lunar space suits, as it has been over 40 years since the last lunar space suit was used during the Apollo era. The suit is designed to shield astronauts from the harsh conditions on the Moon's surface, including the vacuum, extreme temperatures, and abrasive lunar dust. The suit must also provide a reliable life support system, allowing astronauts to breathe, eat, and communicate while on the Moon.

Design and Features

The new suit boasts a distinctive design, with red stripes on the arms and legs inspired by traditional Chinese art and rocket flames. The suit features a multifunctional integrated control panel, flexible and reliable gloves, cameras, and a panoramic glare-proof helmet visor. The suit's mobility was demonstrated by astronauts Zhai Zhigang and Wang Yaping, who walked, bent, and climbed a ladder to showcase its flexibility.

Technical Specifications

While specific technical details about the suit were not disclosed, it is likely that the life support system was left out to reduce weight under Earth's gravity. The suit is likely based on the Chinese Feitian space suit, which was inspired by the Soviet/Russian Orlan space suit. The new suit may weigh around 120 kg (260 lb) when fully equipped, have a duration of eight hours on its internal systems, and be good for at least 15 uses.

Public Naming Competition

The new suit will be given a name after a public naming competition. The CMSA is calling on the public to submit suggestions for the suit's name, which will be chosen from the submissions.

Conclusion

The new lunar space suit is a significant milestone in China's space program, demonstrating the country's commitment to lunar exploration and its ability to design and develop advanced space suits. The suit's design and features are a testament to the country's expertise in space technology, and its unveiling marks an exciting new chapter in China's space program. With this revolutionary new suit, China is poised to make significant strides in lunar exploration and pave the way for future human missions to the Moon.

How Meta Movie Gen could usher in a new AI-enabled era for content creators

Today, we’re excited to premiere Meta Movie Gen, our breakthrough generative AI research for media, which includes modalities like image, video, and audio.

#ai #technology #meta #video #agi

Meta Movie Gen: A Breakthrough in AI-Powered Media Creation

Meta has unveiled its latest advancement in generative AI research: Meta Movie Gen, a comprehensive suite of AI models designed to revolutionize media creation. This groundbreaking technology represents Meta's third wave of generative AI development, building upon their previous work with Make-A-Scene and Llama Image foundation models.

Key Capabilities

1. Video Generation

At the heart of Movie Gen is a sophisticated 30B parameter transformer model capable of:

  • Creating high-quality, high-definition videos up to 16 seconds long
  • Operating at 16 frames per second
  • Reasoning about object motion, subject-object interactions, and camera movements
  • Learning plausible motions for diverse concepts

2. Personalized Video Generation

The system can:

  • Take a person's image and combine it with text prompts
  • Generate videos featuring the reference person while maintaining identity and motion
  • Incorporate rich visual details based on text instructions

3. Precise Video Editing

Movie Gen's editing capabilities include:

  • Taking both video and text prompts as input
  • Performing localized edits (adding, removing, or replacing elements)
  • Implementing global changes such as background or style modifications
  • Preserving original content while targeting specific pixels

4. Audio Generation

A dedicated 13B parameter audio model offers:

  • High-quality, high-fidelity audio generation up to 45 seconds
  • Ambient sound, sound effects, and instrumental background music
  • Synchronization with video content
  • Audio extension for videos of any length

Technical Achievements

The development of Movie Gen required significant advancements in multiple areas:

  • Architecture innovations
  • Training objectives optimization
  • Data recipe refinement
  • Evaluation protocol development
  • Inference optimizations

Human evaluations have shown that Movie Gen outperforms competing industry models across all four of its core capabilities.

Limitations and Future Development

While promising, the current models do have limitations:

  • Inference time optimization is still needed
  • Quality improvements could be achieved through further scaling

Impact and Applications

Meta envisions Movie Gen enabling various creative applications:

  • Animating and editing "day in the life" videos for social media
  • Creating customized animated greetings
  • Empowering aspiring filmmakers and content creators

Ethical Considerations

Meta emphasizes that:

  • This technology is not meant to replace artists and animators
  • The goal is to democratize creative tools
  • The models were trained on licensed and publicly available datasets

Looking Ahead

Meta plans to:

  • Work closely with filmmakers and creators for feedback
  • Focus on creating tools that enhance inherent creativity
  • Potentially release the technology for public use in the future

This research represents a significant step forward in democratizing advanced media creation tools, potentially allowing anyone to bring their artistic visions to life through high-definition videos and audio.

Who was William Deming?

William Edwards Deming (1900-1993) was an American statistician, professor, author, and management consultant who made significant contributions to the fields of quality management and process improvement. He is best known for his work in helping Japanese industries rebuild after World War II and for developing the "14 Points for Management," which revolutionized quality control and management practices.

Key aspects of Deming's legacy include:

  1. Statistical Process Control (SPC): He advocated for using statistical methods to identify and reduce variation in manufacturing processes.

  2. The PDCA Cycle (Plan-Do-Check-Act): Also known as the Deming Cycle, this iterative approach to problem-solving and continuous improvement remains widely used today.

  3. System of Profound Knowledge: His management philosophy that emphasizes understanding variation, psychology, systems thinking, and the theory of knowledge.

  4. Quality improvement: Deming stressed that quality should be built into the product from the beginning, rather than inspecting for defects after production.

While initially overlooked in the United States, his methods gained widespread recognition in Japan, where he is credited with contributing significantly to Japan's post-war economic miracle. Japanese manufacturers adopted his principles, leading to dramatic improvements in product quality and manufacturing efficiency. The prestigious Deming Prize was established in Japan in 1951 to recognize companies that excel in quality management.

Deming's ideas eventually gained traction in the United States in the 1980s when American companies, facing competition from high-quality Japanese products, began to adopt his methods. His philosophy emphasized:

  • Focusing on long-term thinking over short-term profits
  • Breaking down barriers between departments
  • Continuous improvement of processes
  • Employee empowerment and education
  • Leadership over supervision

His most famous book, "Out of the Crisis" (1982), outlined his management principles and critiqued American management practices. His work continues to influence modern management theory, lean manufacturing, Six Sigma, and other quality improvement methodologies.

Robot-to-drone food deliveries take off in Dallas

Though not exactly commonplace quite yet, rolling robots have been delivering goods to local customers for a while now. At the same time, packages have been flying overhead by drone. Now Serve Robotics and Wing Aviation are ready to merge the two.

#technology #newsonleo #drone #fooddelivery

Wing and Serve Robotics Join Forces to Revolutionize Last-Mile Delivery

In a groundbreaking development for autonomous delivery technology, Wing, an Alphabet-owned drone delivery company, is partnering with sidewalk robot operator Serve Robotics to create a seamless multi-modal delivery system. This collaboration aims to overcome the limitations of each individual technology while expanding the reach and efficiency of autonomous deliveries.

Background of the Partners

Wing

  • Owned by Google's parent company, Alphabet
  • Operating since at least 2018 (longer if you count its incubation within Alphabet's X moonshot division)
  • Uses "highly automated" battery-electric drones
  • Has delivered a wide variety of items
  • Serves hundreds of thousands of customers across multiple countries:
    • Australia
    • United States
    • Finland
    • Ireland

Serve Robotics

  • Backed by Uber
  • Though newer to the market, has completed tens of thousands of deliveries
  • Key enterprise partners include:
    • Uber Eats
    • 7-Eleven

The Pilot Project

How It Works

  1. Serve's robot collects food items from restaurants
  2. The robot travels to an Autoloader within a few blocks
  3. Wing's drone picks up the package from the Autoloader
  4. The hybrid multi-rotor/fixed-wing aircraft delivers to customers up to 6 miles away
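Neither company has published its dispatch software, but as a minimal sketch under assumed names, the snippet below models those four steps as a simple routing decision: anything within the robot's 2-mile radius stays on the sidewalk, and anything up to 6 miles is handed off at the Autoloader for the drone leg.

```python
# Minimal sketch of the robot-to-drone handoff described above. All class and
# method names are hypothetical; neither Serve nor Wing has published an API.
from dataclasses import dataclass

ROBOT_RANGE_MILES = 2   # Serve sidewalk robot radius cited in the article
DRONE_RANGE_MILES = 6   # Wing drone radius cited in the article

@dataclass
class Order:
    restaurant: str
    autoloader: str        # nearby handoff point, "within a few blocks"
    customer_miles: float  # straight-line distance from restaurant to customer

def deliver(order: Order) -> str:
    """Route an order through the four steps of the pilot workflow."""
    if order.customer_miles > DRONE_RANGE_MILES:
        return "out of range for the pilot"
    # 1. Sidewalk robot collects the food at the restaurant.
    # 2. Robot drives the few blocks to the Autoloader.
    # 3. Drone picks the package up from the Autoloader.
    # 4. Drone flies the final leg to the customer.
    leg = "robot only" if order.customer_miles <= ROBOT_RANGE_MILES else "robot + drone"
    return f"{order.restaurant} -> {order.autoloader} -> customer ({leg})"

print(deliver(Order("Taqueria Dallas", "Autoloader #3", 4.5)))  # robot + drone
```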

Benefits

  1. Expanded delivery radius
    • From 2 miles (Serve robots alone) to 6 miles (with Wing drones)
    • Enables 30-minute autonomous delivery across an entire city (a rough coverage comparison follows this list)
  2. No additional infrastructure required for businesses
  3. Potential cost reduction for:
    • Service operators
    • Customers
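The radius jump is larger than it sounds because serviceable area grows with the square of the radius; a rough back-of-the-envelope comparison, assuming simple circular coverage and ignoring street layout:

```python
# Back-of-the-envelope coverage comparison, assuming circular service areas.
import math

robot_area = math.pi * 2 ** 2   # ~12.6 sq mi reachable by sidewalk robot alone
combo_area = math.pi * 6 ** 2   # ~113.1 sq mi once the drone leg is added
print(f"coverage multiplier: {combo_area / robot_area:.1f}x")  # prints 9.0x
```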

Industry Perspectives

Dr. Ali Kashani, CEO and co-founder of Serve, emphasized the expanded market potential: "We're excited to partner with Wing to offer a multi-modal delivery experience that expands our market from roughly half of all food deliveries that are within 2 miles of a restaurant, to offering 30-minute autonomous delivery across an entire city."

Wing's CEO, Adam Woodworth, highlighted the strategic benefits: "Through this pilot partnership, Wing hopes to reach more merchants in highly-congested areas while supporting Serve as it works to expand its delivery radius."

Looking Ahead

While the duration of the pilot program remains undisclosed, this partnership represents a significant step forward in autonomous delivery technology. By combining the strengths of sidewalk robots and aerial drones, Wing and Serve Robotics are pioneering a new approach to solving the challenges of last-mile delivery in urban environments.

The History of Drone Technology

Early Beginnings (Pre-1900s)

  • The concept of unmanned aerial vehicles dates back to 1849 when Austria attacked Venice using unmanned balloons loaded with explosives
  • These early "drones" were simple, uncontrolled aircraft

Military Origins (1900s-1950s)

  • World War I: First pilotless aircraft developed
  • 1930s: British Royal Navy developed the Queen Bee, a radio-controlled target drone
  • World War II:
    • Nazi Germany created the V-1 "buzz bomb"
    • U.S. developed assault drones and aerial torpedoes

Cold War Era Advancement (1960s-1990s)

  • Vietnam War saw increased use of reconnaissance drones
  • Israel pioneered modern drone warfare in the 1973 Yom Kippur War
  • Developments in electronics and GPS technology improved drone capabilities
  • Key models:
    • Ryan Firebee
    • D-21 Tagboard
    • Pioneer UAV

Commercial Revolution (2000s-Present)

  • 2006: FAA issued first commercial drone permit
  • 2010-2015: Consumer drones became widely available
    • DJI founded in 2006, became market leader
    • Parrot AR.Drone introduced as first smartphone-controlled drone
  • Key developments:
    • Miniaturization of components
    • Improved battery technology
    • Advanced flight control systems
    • Better cameras and sensors

Current Applications

  1. Military
    • Reconnaissance
    • Combat operations
    • Training
  2. Commercial
    • Package and food delivery
    • Aerial photography, mapping, and surveying
    • Agriculture and infrastructure inspection
  3. Consumer
    • Recreation
    • Racing
    • Videography

Future Trends

  • Autonomous capabilities: Enhanced AI and machine learning
  • Swarm technology: Multiple drones working together
  • Urban air mobility: Passenger drones and flying taxis
  • Improved regulations: As technology advances, laws adapt

Impact on Society

  • Privacy concerns: Surveillance and data collection
  • Safety issues: Air traffic and accident risks
  • Economic opportunities: New jobs and industries
  • Environmental considerations: Both positive and negative effects

Notable Milestones

  • 1935: First mass-produced drone (Queen Bee)
  • 1982: Israeli Air Force uses drones to defeat Syrian Air Force
  • 2002: CIA uses armed drone in Afghanistan
  • 2013: Amazon announces drone delivery plans
  • 2016: First drone delivery service in Rwanda

This history of drone technology covers the major developments from early concepts to modern applications.
