RE: LeoThread 2024-10-29 05:12

in LeoFinance

Here is the daily technology #threadcast for 10/29/24. The goal is to make this a technology "reddit".

Drop all questions, comments, and articles relating to #technology and the future. The goal is to make it a technology center.

OpenAI's New Statement On GPT-5 Is Surprising!

#technology #newsonleo #ai

Summary below ⏬

OpenAI's Future and AGI Timeline: Industry Leaders' Predictions

Recent statements from OpenAI's leadership and other industry experts suggest that artificial intelligence development is progressing at a pace that exceeds public understanding, with transformative advances potentially just years away. This emerging narrative, supported by multiple industry leaders, points to a rapidly approaching future where artificial general intelligence (AGI) may become a reality.

OpenAI's Internal Research Meetings

According to OpenAI's Chief Financial Officer Sarah Friar, who joined the company in June 2024, the company holds weekly research meetings that consistently "blow her mind" with previews of upcoming capabilities. This rare glimpse into OpenAI's internal operations suggests that the company maintains a significant technological lead over its competitors, despite the recent achievements of other AI labs.

This revelation gains credibility when examining OpenAI's track record of developing technologies well before their public release. A notable example is GPT-4, which was completed in August 2022, months before the release of ChatGPT. This means OpenAI had already developed its next-generation model while the public was still marveling at GPT-3.5's capabilities. GPT-4 subsequently maintained its position at the technological frontier for approximately two years before other labs approached similar capabilities.

Industry Leaders' AGI Predictions

Several prominent figures in the AI field have made bold predictions about the timeline for achieving artificial general intelligence:

Sam Altman's Vision

In his September 2024 blog post "The Intelligence Age," OpenAI CEO Sam Altman suggested that superintelligence might be possible "in a few thousand days." While this timeline could span from three to ten years, it represents a remarkably short horizon for such a transformative development. Altman describes this potential achievement as "the most consequential fact about all of history so far."

Ray Kurzweil's "Conservative" Estimate

Ray Kurzweil, known for his 86% accuracy rate in technological predictions and his pioneering work in pattern recognition technology, suggests that AGI could arrive by 2029. Notably, this five-year timeline is now considered "conservative" by industry standards, highlighting how rapidly expectations have accelerated.

Anthropic's Perspective

Anthropic CEO Dario Amodei, whose company created the Claude AI assistant, points to a "smooth exponential" growth in AI capabilities. He predicts that by 2025-2027, with increased funding and continued improvements in algorithms and chip technology, we could see models that surpass human capabilities in most areas. This development would be facilitated by models costing between $10 billion and $100 billion to train, a significant increase from current costs.

The Path to Superintelligence

Industry leaders identify several key factors driving this rapid progress:

  1. Self-Improving Systems: Future AI systems will contribute to developing better next-generation systems, creating a positive feedback loop of technological advancement.

  2. Scientific Progress: AI is expected to accelerate scientific discovery across multiple fields, including biology, physics, chemistry, and mathematics.

  3. Exponential Growth: The combination of improved hardware, algorithms, and increased funding is creating conditions for exponential advances in AI capabilities.

Defining Future AI Capabilities

Dario Amodei outlines a vision of future AI systems that would:

  • Match or exceed Nobel Prize winner-level expertise across multiple fields
  • Solve previously unsolved mathematical problems
  • Create high-quality creative works
  • Develop complex software from scratch
  • Interface seamlessly with various human tools and communications systems

Conclusion

While some may dismiss these predictions as marketing hype, OpenAI's track record of delivering breakthrough technologies suggests their internal assessments deserve serious consideration. The convergence of predictions from multiple industry leaders, each with significant expertise and access to cutting-edge developments, indicates that transformative AI capabilities may be closer than many realize. The next few years could prove crucial in determining how this technology develops and its impact on society.

As we've seen with previous releases like GPT-4, Sora, and GPT-4's advanced voice capabilities, OpenAI has consistently demonstrated its ability to surprise and exceed expectations. The indication that even more impressive developments are in the pipeline suggests we may be on the cusp of even more significant technological breakthroughs in artificial intelligence.

Apple just announced a new Mac Mini. For those unaware, the ‘mini’ is Apple’s most affordable desktop computer. The 2024 refresh features a brand-new design and a new, more powerful M4 silicon under the hood.

More details in the link in the reply thread

#technology #innovation #mac #apple #freecompliments

Robot

MIT breakthrough could transform robot training

MIT researchers have developed a robot training method that reduces time and cost while improving adaptability to new tasks and environments.

The approach – called Heterogeneous Pretrained Transformers (HPT) – combines vast amounts of diverse data from multiple sources into a unified system, effectively creating a shared language that generative AI models can process. This method marks a significant departure from traditional robot training, where engineers typically collect specific data for individual robots and tasks in controlled environments.

#newsonleo #technology #robot

Lead researcher Lirui Wang – an electrical engineering and computer science graduate student at MIT – believes that while many cite insufficient training data as a key challenge in robotics, a bigger issue lies in the vast array of different domains, modalities, and robot hardware. Their work demonstrates how to effectively combine and utilise all these diverse elements.

The research team developed an architecture that unifies various data types, including camera images, language instructions, and depth maps. HPT utilises a transformer model, similar to those powering advanced language models, to process visual and proprioceptive inputs.
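
As a rough mental model of that design (a minimal sketch, not the researchers' code, with every layer size an invented placeholder), modality-specific "stems" embed each input type into a shared token space, and a single transformer trunk processes the combined sequence:

```python
# Minimal sketch of the HPT idea: per-modality stems map camera images and
# proprioceptive state into a shared token space; one shared transformer
# trunk consumes the combined sequence. All shapes here are illustrative.
import torch
import torch.nn as nn

class ToyHPT(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2, action_dim=7):
        super().__init__()
        self.vision_stem = nn.Linear(3 * 16 * 16, d_model)   # one token per 16x16 RGB patch
        self.proprio_stem = nn.Linear(14, d_model)            # joint state -> one token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, n_layers)   # shared across robots
        self.action_head = nn.Linear(d_model, action_dim)     # robot-specific head

    def forward(self, image, proprio):
        b = image.shape[0]
        # Split a 128x128 image into 64 patches of 16x16 and embed each one.
        patches = image.unfold(2, 16, 16).unfold(3, 16, 16)             # b,3,8,8,16,16
        patches = patches.reshape(b, 3, 64, 256).permute(0, 2, 1, 3).reshape(b, 64, -1)
        vis_tokens = self.vision_stem(patches)                           # b,64,d
        prop_token = self.proprio_stem(proprio).unsqueeze(1)             # b,1,d
        tokens = torch.cat([vis_tokens, prop_token], dim=1)              # equal footing
        return self.action_head(self.trunk(tokens).mean(dim=1))         # b,action_dim

model = ToyHPT()
actions = model(torch.randn(2, 3, 128, 128), torch.randn(2, 14))
print(actions.shape)  # torch.Size([2, 7])
```

Per-robot stems and action heads can then be swapped out while the shared trunk carries over, which matches the article's description of a "shared language" spanning robots and tasks.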

In practical tests, the system demonstrated remarkable results—outperforming traditional training methods by more than 20 per cent in both simulated and real-world scenarios. This improvement held true even when robots encountered tasks significantly different from their training data.

The researchers assembled an impressive dataset for pretraining, comprising 52 datasets with over 200,000 robot trajectories across four categories. This approach allows robots to learn from a wealth of experiences, including human demonstrations and simulations.

One of the system's key innovations lies in its handling of proprioception (the robot's awareness of its position and movement). The team designed the architecture to place equal importance on proprioception and vision, enabling more sophisticated dexterous motions.

Looking ahead, the team aims to enhance HPT’s capabilities to process unlabelled data, similar to advanced language models. Their ultimate vision involves creating a universal robot brain that could be downloaded and used for any robot without additional training.

While acknowledging they are in the early stages, the team remains optimistic that scaling could lead to breakthrough developments in robotic policies, similar to the advances seen in large language models.

AI News: Copilot Agent Builder, IBM Granite 3.0, NEW Claude Sonnet, Open-Source Text To Video!

Summary below ⏬

#newsonleo #technology #ai

Major AI Industry Developments Bring Wave of Innovation Across Tech Sector

In a remarkable wave of announcements, several major technology companies have unveiled significant advances in artificial intelligence, marking what could be one of the most consequential periods in AI development since the field's inception.

Microsoft Doubles Down on AI Agents

Microsoft has made a bold move into the autonomous agent space with the announcement of Copilot Studio. Set for public preview next month, the platform will allow users to create and deploy AI agents throughout the Windows environment. The company is particularly focused on enterprise applications, introducing ten new autonomous agents in Dynamics 365 specifically designed for sales, service, finance, and supply chain teams.

CEO Satya Nadella's vision suggests a future where millions of AI agents will become integral to the corporate workforce. While initial implementations may focus on basic automation, the company anticipates these agents will evolve to become increasingly proactive rather than merely reactive.

This announcement has not gone without criticism. Salesforce CEO Marc Benioff notably dismissed the initiative as "Clippy 2.0," suggesting Microsoft lacks the necessary data, metadata, and enterprise security models for effective corporate intelligence. However, given Microsoft's 98% enterprise penetration rate, such criticism may be more competitive positioning than substantive critique.

Anthropic's Major Model Updates and Computer Control

Anthropic has made several significant announcements, including the release of Claude 3.5 Sonnet and Claude 3.5 Haiku. Notably, the smaller Haiku model reportedly outperforms the previous flagship Claude 3.0 Opus, demonstrating remarkable advances in model efficiency.

Perhaps most intriguingly, Anthropic has introduced a "computer use tool," marking their entry into computer control AI. This experimental feature allows AI models to interact directly with computer systems, though early testing has revealed occasional unexpected behaviors, such as models spontaneously switching tasks to research unrelated topics.

IBM's Open Source Initiative

IBM has made a significant contribution to the open-source AI community by releasing multiple models, including Granite 3.0 in 8B and 2B parameter versions, along with a mixture of experts variant. These models are released under the Apache 2.0 license, making them fully open source. IBM has also introduced a novel knowledge integration technique that bridges the gap between RAG (Retrieval-Augmented Generation) and fine-tuning approaches.
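
The announcement does not detail IBM's bridging technique, but for readers unfamiliar with the RAG half of that spectrum, the core retrieval step can be sketched in a few lines (a toy example that uses word overlap in place of the learned vector embeddings real systems rely on):

```python
# Toy illustration of the "R" in RAG: pick the best-matching document
# and prepend it to the prompt so the model can answer from it.
def overlap_score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

docs = [
    "Granite 3.0 comes in 8B and 2B parameter versions under Apache 2.0.",
    "Mixture-of-experts models route each token to specialist subnetworks.",
]
query = "What parameter versions of Granite 3.0 are available?"
best = max(docs, key=lambda doc: overlap_score(query, doc))
prompt = f"Context: {best}\nQuestion: {query}"
print(prompt)  # a language model would answer using the retrieved context
```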

Meta's Open Source Contributions

Meta continues to strengthen its position in the open-source AI community with several significant releases:

  • Segment Anything 2.1, an advanced image and video segmentation tool
  • Spirit LM, an open-source language model for text-to-speech applications
  • Various technical projects aimed at improving training efficiency and inference speed

Industry Movements and New Ventures

Former OpenAI CTO Mira Murati is reportedly raising significant capital (potentially exceeding $100 million) for a new venture focused on proprietary AI models. Given her background and reputation, this development could signal another important player in the AI landscape.

Advances in Image and Video Generation

Several companies have announced improvements in generative AI:

  • Stability AI released Stable Diffusion 3.5, including large and turbo variants, with a medium-sized version announced for October 29th
  • Ideogram launched Canvas, offering an infinite canvas board with AI-powered image generation and editing capabilities
  • Genmo AI released Mochi 1, an open-source text-to-video model
  • Runway introduced Act One, a new feature for generating expressive character performances using simple video inputs

Voice and Development Tools

The ecosystem continues to expand with new tools and capabilities:

  • ElevenLabs introduced voice generation through text descriptions rather than requiring voice cloning
  • xAI launched its API with competitive pricing for its Grok model
  • LM Studio released version 0.3.5 with headless mode capabilities
  • Perplexity AI integrated Claude 3.5 Sonnet and introduced new "thinking" capabilities

This surge of announcements across multiple domains - from enterprise AI to creative tools - suggests we're entering a new phase of AI development where capabilities are rapidly expanding and becoming more accessible to both developers and end users. The emphasis on open-source contributions from major players like IBM and Meta, alongside proprietary developments from established and emerging companies, indicates a healthy ecosystem that benefits from both collaborative and competitive forces.

The rapid pace of development, particularly in areas like computer control and creative tools, suggests we're likely to see continued acceleration in AI capabilities and applications across all sectors. As these tools become more sophisticated and accessible, they're poised to transform both enterprise operations and creative workflows in unprecedented ways.

Meta is reportedly working on its own AI-powered search engine, too

Meta has been working on an AI powered search engine to decrease its dependence on Google and Microsoft. The Meta AI bot built into Meta's apps currently uses Google and Microsoft Bing to answer questions about recent news and events. Meta has been building a database of information for its chatbot. It is also building up location data that could compete with Google Maps.

#technology #meta #google #ai #artificialintelligence

LeoAI should be working on its own search engine.

We need to understand whatever other platforms are working on can be incorporated into Leo.

Very true. AI is completely re-shaping the way we interact with the internet... gotta stay up to date with what's going on

🧵1/ ChatGPT can now see, hear, and speak

OpenAI’s latest updates to ChatGPT introduce GPT-4 Turbo, which improves speed, reduces costs, and automatically selects the right tools, such as DALL-E for images.

ChatGPT can now "see, hear, and speak," thanks to new voice and image processing capabilities, powered by advanced text-to-speech and Whisper, OpenAI's speech recognition system.

🧵3/ The updates also include a new macOS desktop app and a simplified, more user-friendly interface, with these enhancements gradually rolling out to free and Plus users.

Apple announces new iMac with M4 chip, starts at $1,299

Apple has announced an updated iMac with the company's latest M4 chip. Starting at $1,299, it is now available for order ahead of its release on November 8. The M4 iMac comes in new shades, including green, yellow, orange, pink, purple, and silver. It has four USB-C ports on the back with the Thunderbolt 4 standard and supports up to two additional 6K displays. The iMac comes with an updated Magic Mouse and Magic Keyboard with USB-C charging. Apple may announce more new Macs over the coming days.

#technology #apple #imac #m4

Open-source AI must reveal its training data, per new OSI definition

The Open Source Initiative (OSI) has released its official definition of 'open' artificial intelligence. For an AI system to be considered truly open source, it must provide the complete code used to build and run the AI, access to details about the data used to train the AI so others can understand and recreate it, and the settings and weights from the training. The definition directly challenges Meta's Llama, which has restrictions on commercial use and does not provide access to training data. Meta claims that it restricts access to its training data due to safety concerns, but critics say the company is just minimizing its legal liability and safeguarding its competitive advantage.

#technology #artificialintelligence #ai #opensource #osi

Technology

Brazilians create a device that measures glucose without finger pricks

What if it were possible to measure glucose without pricking a finger? That was the question guiding researchers at Santa Catarina State University (Udesc) when they created E-Gluco, a device equipped with sensors that connect to an app via Bluetooth to measure glucose 24 hours a day, non-invasively.

#newsonleo #technology #hivebr

According to the announcement, the E-Gluco app stores the measurements, processes them, and presents the data to the user. It also uses artificial intelligence to provide insights about the collected data, such as glucose-level trends over time and even warnings to help prevent hypoglycemia episodes.

"The main goal of E-Gluco is to make blood glucose monitoring easier and more accessible for people without the use of extra sensors, especially those who suffer from type 1 diabetes. With the app, users can monitor their glucose levels quickly and conveniently, without painful needle pricks and with the convenience of viewing their data in one place," says the statement published by Udesc.

The project is led by Professor Bertemes of Udesc together with researcher Paolo Meloni of the University of Cagliari (Italy), and includes four researchers, undergraduate and graduate students, and a postdoctoral fellow.

"We are a project still under construction, not yet available for sale and in the testing phase. We are currently on version 3 of the physical watch," says the project's website. The project itself has been running since 2016 and has already involved 80 volunteers.

For now, the group is also recruiting more volunteers for its research, both people with diabetes and healthy people. Registration is done through the project's website.

Diabetes

Monitoring glucose levels without needles could change the lives of diabetes patients. According to the Brazilian Diabetes Society, more than 13 million people in Brazil currently live with the disease, representing 6.9% of the national population.

Brazil's Ministry of Health says the best way to prevent diabetes is to exercise regularly, maintain a healthy diet, and avoid alcohol and tobacco.

Science and technology are working together to advance glucose measurement. The Galaxy Watch 7, for instance, promises to measure blood glucose without piercing the skin.

We’ve Found the Source of Most Meteorites

70% of meteorites come from three collisions that occurred in the asteroid belt within the past 40 million years. An exhaustive study by an international team of scientists has tracked down the sources of most meteorites falling on Earth. The team gathered data and ran simulations to reveal how collisions produce families of asteroids that start in similar orbits and how those fragments spread out over time. Their work still needs to be validated, but it explains several key observables with a single model.

#technology #space #solarsystem #meteorites

Bitcoin has topped the $70k mark for the first time in more than four months. Do you agree it's because of the ETFs, and do you think this will also be the case in January? #freecompliments

Technology

81-year-old biohacker invests more than R$4 million in anti-aging startups

The fight against aging is, more often than not, a race against time, as the story of 81-year-old biohacker Kenneth Scott shows. Seeking to fund new technologies in anti-aging and the beauty industry, he has already invested around 4 million reais in biotechnology startups and is now testing procedures not authorized in some countries, such as gene therapies still in the validation phase.

#newsonleo #technology #startup #hivebr

An investor, billionaire, software developer, and real estate entrepreneur, Scott believes that, having passed 80, he has too little time left to wait for all the regulatory approvals that are fundamental to guaranteeing a procedure's safety and efficacy.

"I have a life expectancy of about seven more years at this point," the elderly biohacker tells Quartz. "I don't have much interest in five-year [clinical] trials, so I just go ahead and do it," he adds. "My concern is with me, not with the regulations that were created," he stresses.

Despite a certain romanticism in his all-out war on aging, Scott takes biohacking (attempting to hack human biology with technology) to a dangerous level. Without proof of efficacy, some of these procedures, especially gene therapies performed in countries with little regulation, can have the opposite effect.

The cost of halting aging

To hold back the body's natural aging, Kenneth Scott and his wife Christine Scott spend US$70,000 (around 400,000 reais) per year on a wide variety of aesthetic and medical treatments.

The biohacker is a major investor in biotechnology, backing the startups Repair Biotechnologies (which develops treatments against cholesterol buildup in the body), Leucadia (which works on Alzheimer's disease), and OncoSenX (which seeks new treatments against cancer), as well as VivaSparkle (which scouts potentially successful research).

In this anti-aging field (which spans countless other startups and research efforts around the world), Scott estimates he has already invested between US$500,000 and US$750,000 (between 2.8 million and 4.2 million reais).

Anti-aging treatments?

The octogenarian biohacker's daily practices include, as one might expect, physical activity and a healthy diet. He has also developed his own shampoo based on a leukemia drug and does not use soap.

He also undergoes facial treatments with platelet-rich blood plasma and takes injections of amniotic exosomes (derived from amniotic fluid) to stave off aging.

The most controversial of his interventions is a gene therapy using the protein follistatin, developed by the biotech startup Minicircle in Honduras. The treatment is marketed as a way to delay the onset of aging but is not authorized by major regulatory agencies such as the Food and Drug Administration (FDA) in the US.

"Our culture shares the mindset that we are born to die. From childhood, we are taught that we are going to die," Scott explains. However, "that culture is outdated," says the biohacker about the need to break through humanity's final frontier on the way to immortality.

Fighting the biological clock

Among millionaires, Scott is not the only one investing large sums to stop the biological clock. Much younger, at under 50, Bryan Johnson is another biohacker who has stirred controversy with his strategies.

Johnson has even stated that his life is controlled by an algorithm capable of determining what is best for his health:

Meta

Meta is reportedly working on its own AI-powered search engine, too

Meta is working on an AI-powered search engine to decrease its dependence on Google and Microsoft, according to a report from The Information. The search engine would reportedly provide AI-generated search summaries of current events within the Meta AI chatbot.

#newsonleo #technology #meta #ai

The Meta AI bot built into Instagram and Facebook currently uses Google — whose parent company, Alphabet, will report quarterly earnings tomorrow — and Microsoft Bing to answer questions about recent news and events.

That could eventually change, as Meta’s web crawler was spotted roving the web months ago. The Information’s source indicates a team has been working for about eight months to build a database of information for its chatbot. Meta has worked on building up location data that could compete with Google Maps, and last month, Bloomberg reported that Apple’s work on search tools in the App Store showed how it “has what it needs” for its own AI-powered Google Search replacement.

Last week, Meta also announced that it struck a multi-year deal with Reuters, allowing its chatbot to use the outlet’s news articles in answers. The Verge reached out to Meta with a request for comment but didn’t immediately hear back.

Otherwise, OpenAI has confirmed it’s working on an AI search engine called SearchGPT. At the same time, Perplexity’s AI search engine is the subject of lawsuits from News Corp and is also facing legal threats from other publishers, including The New York Times.

China's first space tourism venture sells initial pair of tickets

Deep Blue Aerospace used a livestream on Chinese e-commerce site Taobao to sell two tickets on its first suborbital flights. At just ¥1,000,000 ($140,000), the seats were at a deep discount compared to the expected cost of ¥1,500,000 ($210,000) per seat. The flight is not expected to launch for another three years or so. The livestream, which was the first time space tourism tickets have gone on public sale in China, attracted an audience of three million viewers.

#technology #space #china #tourism #deepblueaerospace

To me, space tourism is overrated.

The true explosion due to space advancement is going to be space manufacturing. For things such as drugs due to protein stacking or fiber optic manufacturing, space is the ideal setting since gravity is the enemy.

I always thought mining asteroids would be a thing by the end of the century for valuable metals. We'll see if that happens; it would go hand in hand with manufacturing too.

That is going to be something, but it will be for use in space. I do not see asteroid mining resulting in the materials being brought back to Earth. Instead, as we build more infrastructure in space, what is mined will be used there for materials and fuel.

gravity is the enemy 😳 I thought that was what made life possible

If you are trying to do protein folding, it is the enemy. Same with making fiber optic cable.

I understand now sir

I agree. But I'm a sucker for space so I would definitely take the bait on a space tour if I could afford it LOL

China

China announces Shenzhou-19 crew, including the country's third female astronaut

China today revealed the crew of the Shenzhou-19 mission, which will be launched on Wednesday toward the Tiangong space station. On board will be the country's third female astronaut, marking another significant step in the Chinese space program.

#newsonleo #technology #hivebr #china #space

The astronauts are Cai Xuzhe, Song Lingdong, and Wang Haoze, with Cai in command, as announced by the China Manned Space Agency at a press conference at the Jiuquan Satellite Launch Center. Cai already flew on the Shenzhou-14 mission in 2022, while Song and Wang, both born in the 1990s, will be making their first trips to space. Song was an Air Force pilot, and Wang an engineer at the Academy of Aerospace Propulsion Technology.

Following in the footsteps of Liu Yang (China's first female astronaut, in 2012) and Wang Yaping (the second, in 2013), Shenzhou-19 will launch at 04:27 on Wednesday (Beijing time) and carry the crew to the Tiangong station, where they will conduct experiments, including research into building lunar habitats with bricks made from simulated lunar soil.

China plans, together with Russia and other countries, to establish a scientific base at the Moon's south pole, and it is so far the only country to have landed on the far side of the Moon. Tiangong is expected to operate for about ten years and could become the only active space station should the International Space Station be retired.

Space

NASA astronaut hospitalized after nearly 8 months in space is discharged

After spending a night at Ascension Sacred Heart Pensacola hospital in Florida, the NASA astronaut who had been hospitalized upon returning to Earth from a nearly eight-month mission aboard the International Space Station (ISS) has been discharged, according to information released by NASA on Saturday (26).

#newsonleo #technology #space #nasa #hivebr

The astronaut, about whom no specific information was released to protect medical privacy, returned to NASA's space center in Houston on Saturday, the day of the discharge.

"The crew member is in good health and will resume normal post-flight reconditioning with the other crew members," the American space agency said in a statement.

The reason for the hospitalization was not disclosed.

SpaceX's Crew Dragon capsule splashed down in the Gulf of Mexico early last Friday (25) with four people on board.

Aboard the spacecraft were the Americans Matthew Dominick, Michael Barratt, and Jeanette Epps, and the Russian Alexander Grebenkin.

NASA initially announced that the entire crew had been taken to a medical facility as a precaution. Later, the agency said three mission members had already left the hospital.

According to the agency, the member who remained hospitalized was in stable condition and under observation as a precaution.

For its part, the Russian space agency Roscosmos posted a photo on Telegram of Grebenkin standing and smiling, with the caption: "After a space mission and landing, cosmonaut Alexander Grebenkin is feeling great!"

CREW-8

At 235 days, the Crew-8 astronauts' stay aboard the ISS, a laboratory the size of a football field, was longer than the typical six-month missions astronauts spend on the station. It also marked the longest mission yet for SpaceX's Crew Dragon, which debuted in 2020.

SpaceX has flown to the ISS 44 times. Elon Musk's company is the United States' only option for astronaut trips to the ISS. The plan to make Boeing's Starliner a second option has advanced slowly due to obstacles in the capsule's development.

The crew's return was delayed for weeks due to two hurricanes that passed through the southeastern US, near Crew Dragon's expected landing zones.

On Wednesday (23) afternoon, the spacecraft safely undocked from the ISS and re-entered Earth's atmosphere in the early hours of Friday (25).

At a post-landing press conference, a NASA representative said "the crew is doing great" and mentioned no problems with the astronauts. But he cited problems with the parachutes.

Richard Jones, deputy manager of the agency's Commercial Crew Program, said the initial set of drogue parachutes was struck by debris and that 1 of the 4 parachutes in a subsequent set took longer than expected to deploy.

Jones said, however, that these issues did not affect the crew's safety. The spacecraft used in the mission was on its fifth flight, with 702 days in orbit since its first mission.

Spain

Spanish TV and radio broadcasters sue Meta, seeking nearly R$1 billion

Television and radio broadcasters have filed a lawsuit against Meta, owner of WhatsApp, Instagram, and Facebook, for unfair competition, seeking more than 160 million euros (R$988.22 million).

#newsonleo #technology #meta #spain #hivebr

According to the EFE news agency, the suit was filed on Thursday (24) by members of Uteca (the Union of Free-to-Air Televisions) and AERC Radio Value (the Spanish Radio Association), including the broadcasters Atresmedia, Mediaset, Cadena SER, Cope, RAC1, and Onda Cero.

The companies accuse Meta of systematically violating the General Data Protection Regulation between 2018 and 2023, which gave the big tech company an "illicit advantage in the sale of targeted, personalized online advertising."

Last year, Spain's AMI (Association of Information Media) also filed a lawsuit against Meta over the same conduct. The Prisa group, which publishes the newspaper El País, and Vocento, owner of the newspaper ABC, are among the signatories of that suit.

In the lawsuit, Uteca and AERC Radio Value argue that Meta used its platforms' databases to sell personalized advertising and gain an advantage over competitors in the ad market.

Meta's platforms held 44% of the Spanish advertising market in 2022, according to data obtained by EFE.

The owner of WhatsApp and Facebook contested the AMI lawsuit, arguing that the case should not be tried in Spain since the company is headquartered in Ireland. A judge at a Madrid court rejected the company's argument and allowed the case, which is still ongoing, to proceed.

China

Chinese astronauts show you what their space station looks like inside; watch the video

The astronauts aboard China's Tiangong space station have made a video that takes you inside the orbital laboratory without leaving home. In about seven minutes, the members of the Shenzhou-18 mission explore the interior of the station, which was completed in 2022.

#newsonleo #technology #china #space #hivebr

The new video was published by the state broadcaster CCTV. It begins with the taikonauts (the name given to China's astronauts) showing the kitchen, which has a small heater, a modified microwave, and water bags.

You can also see a bit of their sleeping quarters, which feature a very special view: there, the crew can admire the beauty of Earth immersed in the darkness of space. Finally, the video shows some of the laboratory's orbital segments, including tomato and lettuce crops in the station's greenhouse.

China's space station

The Chinese space station Tiangong (a name meaning "Heavenly Palace") was built by the Chinese space agency CMSA. The three modules that make up the complex were launched between 2021 and 2022.

Orbiting Earth at an altitude ranging from 340 km to 450 km, Tiangong is 55 meters long and weighs 70 tonnes; for comparison, the International Space Station spans 109 meters and weighs more than 400 tonnes.

Now that the orbital laboratory is complete, the CMSA plans to keep Tiangong occupied by at least three astronauts for at least a decade. During this period, the laboratory will host scientific experiments from China as well as from other countries.

No surprise on this one. We are going to see the LLMs used in many different ways for travel.

For example, there was an article the other day how someone used ChatGPT as a tour guide in Italy.

AI

Open-source AI must reveal its training data, per new OSI definition

The Open Source Initiative (OSI) has released its official definition of “open” artificial intelligence, setting the stage for a clash with tech giants like Meta — whose models don’t fit the rules.

#newsonleo #technology #ai

OSI has long set the industry standard for what constitutes open-source software, but AI systems include elements that aren’t covered by conventional licenses, like model training data. Now, for an AI system to be considered truly open source, it must provide:

  • Access to details about the data used to train the AI so others can understand and re-create it
  • The complete code used to build and run the AI
  • The settings and weights from the training, which help the AI produce its results

This definition directly challenges Meta’s Llama, widely promoted as the largest open-source AI model. Llama is publicly available for download and use, but it has restrictions on commercial use (for applications with over 700 million users) and does not provide access to training data, causing it to fall short of OSI’s standards for unrestricted freedom to use, modify, and share.

Meta spokesperson Faith Eischen told The Verge that while “we agree with our partner OSI on many things,” the company disagrees with this definition. “There is no single open source AI definition, and defining it is a challenge because previous open source definitions do not encompass the complexities of today’s rapidly advancing AI models.”

“We will continue working with OSI and other industry groups to make AI more accessible and free responsibly, regardless of technical definitions,” Eischen added.

For 25 years, OSI’s definition of open-source software has been widely accepted by developers who want to build on each other’s work without fear of lawsuits or licensing traps. Now, as AI reshapes the landscape, tech giants face a pivotal choice: embrace these established principles or reject them. The Linux Foundation has also made a recent attempt to define “open-source AI,” signaling a growing debate over how traditional open-source values will adapt to the AI era.

“Now that we have a robust definition in place maybe we can push back more aggressively against companies who are ‘open washing’ and declaring their work open source when it actually isn’t,” Simon Willison, an independent researcher and creator of the open-source multi-tool Datasette, told The Verge.

Hugging Face CEO Clément Delangue called OSI’s definition “a huge help in shaping the conversation around openness in AI, especially when it comes to the crucial role of training data.”

OSI’s executive director Stefano Maffulli says it took the initiative two years, consulting experts globally, to refine this definition through a collaborative process. This involved working with experts from academia on machine learning and natural language processing, philosophers, content creators from the Creative Commons world, and more.

While Meta cites safety concerns for restricting access to its training data, critics see a simpler motive: minimizing its legal liability and safeguarding its competitive advantage. Many AI models are almost certainly trained on copyrighted material; in April, The New York Times reported that Meta internally acknowledged there was copyrighted content in its training data “because we have no way of not collecting that.” There’s a litany of lawsuits against Meta, OpenAI, Perplexity, Anthropic, and others for alleged infringement. But with rare exceptions — like Stable Diffusion, which reveals its training data — plaintiffs must currently rely on circumstantial evidence to demonstrate that their work has been scraped.

Meanwhile, Maffulli sees open-source history repeating itself. “Meta is making the same arguments” as Microsoft did in the 1990s when it saw open source as a threat to its business model, Maffulli told The Verge. He recalls Meta telling him about its intensive investment in Llama, asking him “who do you think is going to be able to do the same thing?” Maffulli saw a familiar pattern: a tech giant using cost and complexity to justify keeping its technology locked away. “We come back to the early days,” he said.

AI

Universal Music partners with AI company building an ‘ethical’ music generator

Universal Music Group (UMG) announced a new deal centered on creating an “ethical” foundational model for AI music generation. It’s partnered with a company called Klay Vision that’s creating a “Large Music Model” named KLayMM and plans to launch out of stealth mode with a product within months. Ary Attie, its founder and CEO, said the company believes “the next Beatles will play with KLAY.”

#newsonleo #technology #music #ai

The two say the model will work “in collaboration with the music industry and its creators,” without many details about how, while Klay plans to make music AI “more than a short-lived gimmick.”

This is how the companies explain their shared goals:

Building generative AI music models ethically and fully respectful of copyright, as well as name and likeness rights, will dramatically lessen the threat to human creators and stand the greatest opportunity to be transformational, creating significant new avenues for creativity and future monetization of copyrights.

As for how whatever it is they’re working on will affect human artists:

KLAY is developing a global ecosystem to host AI-driven experiences and content, including accurate attribution, and will not compete with artists’ catalogs in traditional music services.

UMG’s new partnership comes as it is involved in lawsuits against AI music generator sites and Anthropic, and in May, it ended a short stand-off with TikTok by signing a new licensing arrangement that covered, among other things, AI-generated music.

Klay is also run by chief content officer Thomas Hesse, who was previously Sony Music Entertainment's president. Former Google DeepMind researcher Björn Winckler, who led the development of Google's Lyria AI music model, is joining the company as its head of research.

It seems I can learn a lot on inleo over here.

What are your thoughts on the upcoming GPT-5 update from OpenAI? Or should it be called an upgrade?

It holds interesting potential although we will see what OpenAI brings out.

My view is that the next wave of updates (Grok3, Llama4, ChatGPT5, Claude4) are all going to be major upgrades over what we have now.

In fact, I think people will be blown away.

Yeah because, what AI is capable of doing now is just insane. When you combine that with automations it's just beautiful.

China

China Telecom trains AI model with 1 trillion parameters on domestic chips

China Telecom, one of the country’s state-owned telecom giants, has created two LLMs that were trained solely on domestically-produced chips.

#newsonleo #technology #china #ai

This breakthrough represents a significant step in China's ongoing efforts to become self-reliant in AI technology, especially in light of escalating US restrictions on China's access to advanced semiconductors.

According to the company’s Institute of AI, one of the models, TeleChat2-115B and another unnamed model were trained on tens of thousands of Chinese-made chips. This achievement is especially noteworthy given the tighter US export rules that have limited China’s ability to purchase high-end processors from Nvidia and other foreign companies. In a statement shared on WeChat, the AI institute claimed that this accomplishment demonstrated China’s capability to independently train LLMs and signals a new era of innovation and self-reliance in AI technology.

The scale of these models is remarkable. China Telecom stated that the unnamed LLM has one trillion parameters. In AI terminology, parameters are the variables that help the model in learning during training. The more parameters there are, the more complicated and powerful the AI becomes.
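
To make "parameters" concrete, here is a toy count of the learnable weights in a small network (purely illustrative, with no relation to the models discussed):

```python
# Toy illustration: every weight and bias below is one "parameter" the
# training process adjusts. A trillion-parameter LLM has ~10^12 of these.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128),  # 64*128 weights + 128 biases = 8,320 parameters
    nn.ReLU(),
    nn.Linear(128, 10),  # 128*10 weights + 10 biases  = 1,290 parameters
)
print(sum(p.numel() for p in model.parameters()))  # 9610
```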

Chinese companies are striving to keep pace with global leaders in AI based outside the country. Washington’s export restrictions on Nvidia’s latest AI chips such as the A100 and H100, have compelled China to seek alternatives. As a result, Chinese companies have developed their own processors to reduce reliance on Western technologies. For instance, the TeleChat2-115B model has approximately 100 billion parameters, and therefore can perform as well as mainstream platforms.

China Telecom did not specify which company supplied the domestically-designed chips used to train its models. However, as previously discussed on these pages, Huawei’s Ascend chips play a key part in the country’s AI plans.

Huawei, which has faced US penalties in recent years, is also increasing its efforts in the artificial intelligence field. The company has recently started testing its latest AI processor, the Ascend 910C, with potential clients waiting in the domestic market. Large Chinese server companies, as well as internet giants that have previously used Nvidia chips, are apparently testing the new chip’s performance. Huawei’s Ascend processors, as one of the few viable alternatives to Nvidia hardware, are viewed as a key component of China’s strategy that will lessen its reliance on foreign technology.

In addition to Huawei, China Telecom is collaborating with other domestic chipmakers such as Cambricon, a Chinese start-up specialising in AI processors. The partnerships reflect a broader tendency in China’s tech industry to build a homegrown ecosystem of AI solutions, further shielding the country from the effects of US export controls.

By developing its own AI chips and technology, China is gradually reducing its dependence on foreign-made hardware, especially Nvidia’s highly sought-after and therefore expensive GPUs. While US sanctions make it difficult for Chinese companies to obtain the latest Nvidia hardware, a black market for foreign chips has emerged. Rather than risk operating in the grey market, many Chinese companies prefer to purchase lower-powered alternatives such as previous-gen models to maintain access to Nvidia’s official support and services.

China’s achievement reflects a broader shift in its approach to AI and semiconductor technology, emphasising self-sufficiency and resilience in an increasingly competitive global economy and in face of American protectionist trade policies.

Tesla releases rare blog post and it’s an interesting one: Standardizing Auto Connectivity

Tesla wants to standardize electrical connectors inside vehicles to unlock operational efficiencies, cost reductions, and manufacturing automation.

#technology #auto #tesla

They need to do this with phones and other devices. It would be great to be able to use a power supply from one cell phone to another. Even from android to iOS and vice versa.

omg, yes

I hate having all these different chargers

Robert Downey Jr. plans to sue any Hollywood executive who signs off on the creation of his digital replica.

Robert Downey Jr. appeared on a recent episode of the “On With Kara Swisher” podcast and sent a stern warning to Hollywood in the age of AI: “I intend to sue all future executives” who sign off on the creation of a Downey digital replica. The Oscar winner does not want his likeness being used on screen through AI technology and/or deepfakes. The topic came up in relation to Downey’s Marvel tenure as Iron Man, but he’s confident Marvel would not recreate his Tony Stark through AI.

#robertdowneyjr #hollywood #ai #technology #likeness

“There’s two tracks. How do I feel about everything that’s going on? I feel about it minimally because I have an actual emotional life that’s occurring that doesn’t have a lot of room for that,” Downey said when asked about being digitally recreated in the future.

“To go back to the MCU, I am not worried about them hijacking my character’s soul because there’s like three or four guys and gals who make all the decisions there anyway and they would never do that to me, with or without me,” he added.

When host Kara Swisher said that “future executives certainly will” want to digitally recreate Downey on the big screen, the actor responded: “Well, you’re right. I would like to here state that I intend to sue all future executives just on spec.”

“You’ll be dead,” Swisher noted, to which Downey replied: “But my law firm will still be very active.”

Downey is currently confronting the future of AI on Broadway in the play “McNeal,” which takes aim at corporate giants in the AI space such as OpenAI CEO Sam Altman.

“I don’t envy anyone who has been over-identified with the advent of this new phase of the information age. The idea that somehow it belongs to them because they have these super huge start-ups is a fallacy,” Downey told Swisher about figures like Altman. “The problem is when these individuals believe that they are the arbiters of managing this but meanwhile are wanting and/or needing to be seen in a favorable light. That is a massive fucking error. It turns me off and makes me not want to engage with them because they are not being truthful.”

The legal system will, of course, have to catch up to the technology changes. I don't think it's right to use someone's likeness without their express permission and compensation.

Will we start seeing actors copyright their own persona? Or is this already a big thing in Hollywood?

Hyundai unveils world’s first hydrogen-powered, silent stealth battle tank

South Korea may become the first nation to develop and deploy fully hydrogen fuel cell-powered main battle tanks.

South Korea’s Hyundai subsidiary Rotem has just unveiled its vision for the future of main battle tanks for the Republic of Korea (ROK): hydrogen-powered powertrains.

The next version of ROK’s K-series battle tanks, the K3, will be powered by hydrogen fuel cells and feature other advanced tech to become one of the world’s most sophisticated tanks.

#military #technology #hydrogen #skorea

I was very curious to know what this tank model would be like.

At first glance it didn't disappoint me, with a modern design and powered by hydrogen, this will be very useful in a battle.

Would make a good second vehicle.

It would look good in your driveway.

Hahaha... dreaming doesn't cost anything, but I don't think it would be very well seen here.

Maybe then it would be better.

Another good thing to have for those neighbors you do not like.

Fortunately, both sides of the house are good neighbors, but I know exactly the type you recommend hahaha...

I think that everywhere there will always be those annoying and undesirable neighbors.

Well you can never have too much for self defense.

The new hydrogen-powered K3 has been developed in collaboration with Korea’s Agency for Defense Development and other national technology research institutions. Once operational, the tank is hoped to enter production as soon as 2040, making it the world’s first.

The hydrogen fuel cells will replace the K-series diesel engines. This will be done in steps, with the first prototypes featuring hybrid hydrogen and diesel engines. It is just the latest in a line of announcements from South Korea in its broader aim to transition its war machines away from combustion engines.

Next-generation hydrogen-powered tanks
“Next-generation main battle tank surpasses all capabilities of today’s MBTs, providing more efficient mission employment with the latest technologies for future warfare. As battlefield conditions change, more changes are required to MBT’s firepower, command and control, and survivability to be more optimized and to create maximum combat synergy,” Hyundai Rotem explains on its website.

“Hyundai Rotem will proactively prepare for future warfare by developing next-generation main battle tanks capable of supplementing combatant’s capabilities and function replacements. Peacekeeping is our prioritized goal,” the company added.

The new K3 will feature improved stealth capabilities, autonomous driving and slave drones, and a new 130-mm smoothbore main gun. “The next-generation tank will have stronger preemptive strike capabilities using an artificial intelligence-based fire control system,” an official at Hyundai Rotem said.

good technological advancement

World’s 1st artificial island to provide 3.5GW wind energy to 3 million homes

The first of the island’s caissons, or foundations, are currently being built in Vlissingen (the Netherlands).

The European Investment Bank (EIB) has agreed to provide Elia Transmission Belgium (ETB) a $702 million (€650 million) grant to help it build the world’s first artificial energy island.

The artificial energy island, according to the details provided by Elia, will aim to provide Belgium with 3.5GW of new offshore wind capacity and enable its transmission onshore, supporting the country's transition to green energy.

The funds have been allocated for the realization of the first phase of the Princess Elisabeth Island project.

#technology #energy #belgium #wind

Doesn't Denmark have similar plans to build artificial islands to generate power in the North Sea?

I don't know. I haven't heard of that, although I know a number of people have mentioned it over the years.

Many feel it is the path forward.

ETB also states that the project is essential for Belgian and European energy transition, “helping to bring large amounts of wind energy from the North Sea to the consumption centers on the mainland.”

The signing of the agreement took place on October 25 at the island’s Caisson yard in Vlissingen (NL).

World’s first artificial energy island
According to the Elia Group, the Princess Elisabeth Island will be constructed between 2024 and 2027, at about 27.9 miles (45 km) off the Belgian coast within the Princess Elisabeth wind zone.

The island is one of ETB’s key projects and is the world’s first artificial energy island.

The project aims to integrate 3.5 GW of additional offshore wind capacity into Belgium’s electricity grid, which can power more than three million households.
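
As a rough sanity check on those figures (a back-of-the-envelope sketch; the capacity factor below is an assumption, not from the announcement):

```python
# Average power per household if 3.5 GW of nameplate capacity serves
# 3 million homes. Offshore wind delivers well below nameplate on
# average, so an assumed ~50% capacity factor is applied here.
capacity_w = 3.5e9        # quoted nameplate capacity
households = 3e6          # quoted number of homes
capacity_factor = 0.5     # assumption, not from the article
print(capacity_w * capacity_factor / households, "W average per household")
# ~583 W, a plausible average for a European household
```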

The Island will reduce the country’s dependence on fossil fuels and provide more affordable green electricity. It will also significantly contribute to the European Union meeting its renewable energy targets and climate-neutrality goal.

According to a press release from the Elia Group, “In addition to unlocking Belgium’s second offshore wind zone, the Princess Elisabeth Zone, the island will also serve as a landing point for additional interconnectors that will link Belgium to its neighbors.”

“Another important element for the EU bank is the project’s innovative nature, featuring hybrid interconnectors and a nature-inclusive design to foster biodiversity and support marine life, making it a benchmark for sustainable energy solutions.”

Google, DPI back African fintech Moniepoint in $110M round

Google's African Investment Fund is a new investor in African fintech Moniepoint, which just closed $110 million in new financing led by DPI.

African fintech Moniepoint just closed $110 million in new financing, and it has landed Google’s Africa Investment Fund as a new investor. The Series C round was led by Development Partners International’s African Development Partners (ADP) III fund.

#african #google #dpi #fintech #funding

Other investors, including African private equity firm Verod Capital and existing investor Lightrock also participated.

Moniepoint, also backed by QED Investors, British International Investment (BII) and Endeavor Catalyst, has raised over $180 million since its launch in 2015.

According to the Financial Times, the round makes Moniepoint a unicorn — a private company with a valuation of $1 billion or more. The African fintech was last valued at nearly $800 million in a QED-led round two years ago.

Moniepoint initially focused on providing infrastructure and payment solutions for banks and financial institutions before pivoting to becoming a business banking provider, an area where it has found remarkable success.

The African fintech caters to small and medium-sized businesses (SMBs) across Nigeria, offering working capital, business expansion loans, and business management tools such as expense management (business payment cards), accounting and bookkeeping solutions, and insurance.

The fintech claims it processes over 800 million transactions, with monthly total value exceeding $17 billion.

PayPal beats on earnings but misses on revenue

PayPal reported better-than-expected third-quarter earnings on Tuesday, but revenue came in slightly below expectations.

#paypal #earnings #finance #technology #fintech

Here's how the company did compared to Wall Street estimates, based on a survey of analysts by LSEG:

  • Earnings per share: $1.20, adjusted vs. $1.07 expected
  • Revenue: $7.85 billion vs. $7.89 billion expected

Revenue increased about 6% in the quarter from $7.42 billion in the same period a year ago. PayPal reported net income of $1.01 billion, or 99 cents per share, compared to $1.02 billion, or 93 cents per share, a year earlier.

It's the first earnings report for CEO Alex Chriss since he reached his one-year mark on the job in September. The stock is up 36% this year and 42% since Chriss joined the payments company, which at the time was mired in a deep slump due to increased competition and a declining take rate, or the percentage of revenue PayPal keeps from each transaction.

Infraspeak raises $19.5M to bring collaboration to facilities management

Infraspeak has raised $19.5 million in a Series B round of funding to bring collaboration to facilities management.

It might not be the sexiest of subjects, but facilities management is central to any business that has a physical premises — the bigger that footprint becomes, the more complex it gets.

#infraspeak #facilities #management #Technology

Portuguese startup Infraspeak has set out to address that with an all-in-one platform that gives facility managers and associated service providers insights and operational control over everything that goes on in a given location.

Founded in 2015, Infraspeak has hitherto raised around $20 million in funding, and secured big-name customers such as KFC, Intercontinental and Primark. And to drive its next phase of growth, the Porto-based startup on Monday said it has raised a further €18 million ($19.5 million) in Series B funding.

Consider a healthcare firm with myriad departments, equipment and contractors spanning various disciplines such as maintenance and cleaning — a lot needs to be coordinated and managed.

AI boom thrusts Europe between power-hungry data centers and environmental goals

The growth in demand for AI could come at a cost to Europe's decarbonization goals as the specialized chips used by firms like Nvidia are expected to result in a rise in energy use of already power-hungry data centers.

The boom in artificial intelligence is ushering in an environmentally conscious shift in how data centers operate, as European developers face pressure to lower the water temperatures of their energy-hungry facilities to accommodate the higher-powered chips of firms such as tech giant Nvidia.

#technology #ai #data #energy #europe #electricity

AI is estimated to drive a 160% growth in demand for data centers by 2030, research from Goldman Sachs shows — an increase that could come at a cost to Europe's decarbonization goals, as the specialized chips used by AI firms are expected to hike the energy use of the data centers that deploy them.

High-powered chips — also known as graphics processing units, or GPUs — are essential for training and deploying large language models, a type of AI. These GPUs require high-density computing power and produce more heat, which ultimately requires colder water to reliably cool the chips.

AI hardware can draw 120 kilowatts of power in just one square meter of a data center, equivalent to the power consumption and heat dissipation of around 15 to 25 houses, according to Andrey Korolenko, chief product and infrastructure officer at Nebius, who referred specifically to the deployment of Nvidia's Blackwell GB200 chip.

"The problem we've got with the chipmakers is that AI is now a space race run by the American market, where land rights, energy access and sustainability are relatively low on the pecking order, and where market domination is key," Michael Winterson, chair of the European Data Centre Association (EUDCA), told CNBC.

Bitcoin tops $70,000 for the first time since June as investors await earnings, Election Day

Bitcoin has struggled to reclaim the $70,000 level this year after reaching a record of $73,797.68 in March.

Bitcoin climbed above $70,000 as investors braced themselves for MicroStrategy earnings and counted the days to the U.S. presidential election.

#bitcoin #microstrategy #election #crypto

Maybe we will have uptober after all 👏🚀

Markets do what markets do.

Green candles and people are an interesting combination.

The price of bitcoin was last higher by about 2% at $71,048.36, according to Coin Metrics. Ether jumped 5%.

Stocks tied to the price of the cryptocurrency rose in tandem with it in premarket trading. Crypto exchange platform Coinbase and bitcoin proxy MicroStrategy advanced 3% and 5%, respectively.

Bitcoin has struggled to reclaim $70,000 this year

The last time bitcoin touched $70,000 was in June. It has tested that level several times this year, after hitting a record in March of $73,797.68, but earlier forays above $70,000 have been mere blips.

Emidat is building a tool to clean up construction by automating environmental reporting

Fixing the climate crisis is a vast, world-sized puzzle. But one particularly large piece of this ginormous conundrum is construction and real estate — which collectively account for around 40% of global greenhouse gas emissions. Enter Munich-based data startup Emidat, which has built a software platform for automating the generation of validated Environmental Product Declaration (EPD) certificates for the construction sector.

#construction #technology #environment #emidat

EPDs are a critical piece of the puzzle for understanding and mitigating the climate impact of the built environment. They require undertaking a product lifecycle assessment (LCA) to declare standardized info about the environmental impact of construction materials and products at each stage of their lifecycle, from production through use to end-of-life.

The problem is that EPDs are typically an arduous and expensive piece of paperwork for manufacturers to produce, says Emidat CEO and co-founder Lisa Oberaigner.

This is where the startup’s data platform comes in, opening up a digital channel for ingesting product data and automating environmental impact declarations via a queryable database that accepts uploads via API, Excel, or BIM (building information modelling), and can itself be accessed via API and UI.
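
To make that ingestion path concrete, here is a hedged sketch of what a product upload over such an API might look like. Emidat's actual endpoints, field names, and authentication are not public, so the URL and schema below are invented for illustration; only the `requests` library usage is standard Python.

```python
# Hypothetical sketch of uploading product data for an automated EPD.
# The endpoint URL, JSON fields, and token are invented placeholders;
# Emidat's real API is not publicly documented.
import requests

product_data = {
    "name": "Precast concrete panel",
    "production_kg_co2e": 120.5,   # assumed cradle-to-gate emissions figure
    "end_of_life_kg_co2e": 8.2,    # assumed end-of-life emissions figure
}

resp = requests.post(
    "https://api.example-epd-platform.test/v1/products",  # placeholder URL
    json=product_data,
    headers={"Authorization": "Bearer <API_KEY>"},        # placeholder token
    timeout=30,
)
resp.raise_for_status()
print("Draft declaration created:", resp.json())
```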

By standardizing the construction sector’s EPD reporting for products, the startup thinks its data layer will fire up the incentive for manufacturers to compete to produce more sustainable buildings materials, helping to shrink the carbon footprint of future builds as a byproduct of platform-enabled transparency.

“The really large manufacturers, the ones that are responsible for these emissions, it’s not like they don’t know how to decarbonize; they know exactly what they need to do, and they invest a lot in these new technologies to produce sustainably,” Oberaigner said. “Now they cannot charge for it, and with what we do, they can put a price tag on it — show that it’s actually more sustainable.”

Despite risks, Vinod Khosla is optimistic about AI

Vinod Khosla has no doubts that humanity's future with AI is bright.

The Sun Microsystems co-founder turned prominent investor predicts that “the need to work will go away” almost entirely thanks to AI.

#vinodkhosla #ai #technology #sunmicrosystems

“Almost all expertise, it doesn’t matter whether you’re talking about primary care physicians, mental health therapists, oncologists, structural engineers or accountants — all of it can be near free,” he said on Monday in a conversation with TechCrunch editor-in-chief Connie Loizos at the TechCrunch Disrupt 2024 conference.

When asked why he is so optimistic about an AI-infused future, Khosla said that the world where most human labor is free will have “great abundance,” adding that GDP growth will increase from “2% to well over 5%.”

Because many people are concerned that AI could be detrimental to society, Khosla recently penned an essay titled: AI: Dystopia or Utopia?

Onstage today, he recapped some of the points from the 13,150-word essay.

Despite his optimism, Khosla acknowledged the potential risks of AI.

Inside the Audio Lab: How Apple developed the world’s first end‑to‑end hearing health experience

Apple’s state-of-the-art Audio Lab in Cupertino, California, supports the innovative work of its acoustic engineers. They use the lab to conduct user studies in various listening rooms and test new features in its anechoic chambers, which completely absorb reflective sounds and isolate external noise.

#apple #audiolab #health #technology

The Audio Lab is the hub for the design, measurement, tuning, and validation of all of Apple’s products with speakers or microphones. It’s also the center for Apple’s multiyear, cross-team collaboration to build the groundbreaking new hearing health features on AirPods Pro 2. Available today as a free software update, the end-to-end experience helps minimize exposure to loud environmental noise with Hearing Protection, track hearing with an at-home Hearing Test, and receive assistance for perceived mild to moderate hearing loss using AirPods Pro as a clinical-grade Hearing Aid.

According to the World Health Organization, approximately 1.5 billion people around the world are living with hearing loss. “Hearing loss affects individuals in every region and country, yet often goes unrecognized. Hearing is a core component of communication for so many and is an important factor for health and wellbeing,” says Shelly Chadha, M.D., the World Health Organization’s technical lead for hearing. “Technology can play an important role in raising awareness and providing intervention options for those affected by hearing loss.”

“Every person’s hearing is different, so we created an innovative, end-to-end hearing health experience that addresses this variability in a way that’s both simple to use and adaptable to a wide range of needs. That’s especially important because hearing loss affects people of all ages with different levels of tech savviness,” says Sumbul Desai, M.D., Apple’s vice president of Health. “With the Hearing Aid feature, we wanted to build something so intuitive, it felt like an extension of your senses. We knew the results would literally change people’s lives — and democratize access to treatment for a condition that affects more than a billion people.”

Engineers used highly specialized spaces across the Audio Lab to help make these breakthrough features possible.

“From the quietest sounds we can hear for the Hearing Test feature, to speech in noisy restaurants for the Hearing Aid feature, and even concert levels for Hearing Protection, we can bring the real world into our acoustics facilities with playback of calibrated soundscapes from all over the world, or take accurate acoustic measurements at the touch of a button,” says Kuba Mazur, Apple’s hearing health lead engineer within Acoustics Engineering.

The Longwave anechoic chamber was built on a separate foundation that uses springs to isolate it from the rest of the lab, allowing for accurate sound measurements without any noise or vibration disturbances. The chamber includes a custom-built loudspeaker and microphone arc that can measure head-related transfer functions, or in other words, how sound interacts with the human body. Having both the loudspeaker and microphone arrays within this chamber makes it a unique space with many applications, including AirPods, iPhone, and HomePod development.

Foster sisters explain why they haven't invested in AI

Oversubscribed Ventures co-founder Sara Foster made a confession onstage on Monday at TechCrunch Disrupt 2024:

“I don’t even have ChatGPT on my phone,” she said.

It’s an unexpected comment from an investor, given that ChatGPT is arguably the most influential product to launch this decade. But what makes the Foster sisters different from career venture capitalists is also their greatest strength: they’re creatives first, and tech industry investors second.

#sarafoster #ai #venturecapital #technology #investing

Across their careers, the Foster sisters, Erin and Sara, have co-founded the clothing line Favorite Daughter, co-led creative for Bumble Bizz and Bumble BFF, and currently co-host a podcast together. Erin Foster even created the top Netflix show “Nobody Wants This,” starring Kristen Bell.

“Even calling it Oversubscribed Ventures was really a nod to our sense of humor because we know that this is not a space that we naturally belong in,” Erin said onstage at TechCrunch Disrupt 2024. “We aren’t trying to cosplay as fund managers, you know? We’re trying to be ourselves and take our unique point of view, and our skill sets, and bring it into this world and inject that in an authentic and honest way without pretending to be anything different than we are.”

In the spirit of being authentically themselves, the Foster sisters opened up onstage about why they’re reluctant to invest in AI.

“If we don’t understand it, we don’t invest in it,” said Sara.

For Erin, a TV writer, the rise of generative AI and its impact on creators is more personal.

Google-backed Open Cloud Coalition launches to lobby European lawmakers

The new Google-backed Open Cloud Coalition launches to lobby European lawmakers, mostly against Microsoft.

Europe has a new lobbying body, one with a self-stated mission to “improve competition, transparency and resilience” in the cloud computing sector.

#google #cloud #technology

The Open Cloud Coalition (OCC) counts 10 members at launch, the most notable of which is Google, supported by international and local cloud providers Centerprise International, Civo, Gigas, ControlPlane, DTP Group, Prolinx, Pulsant, Clairo and Room 101. Part of the collective’s work will involve conducting cloud market research and presenting the results to regulators in the European Union and the U.K., while “engaging in consultations on competition and market fairness,” according to a statement.

The launch comes hours after Microsoft’s deputy general counsel Rima Alaily preempted the announcement, publishing a blog post accusing Google of conducting a “shadow campaign” to influence cloud regulation in Europe. Alaily called the new organization an “astroturf group organized by Google,” adding that the search giant had “gone through great lengths to obfuscate its involvement, funding and control” by using smaller European cloud providers as the face of the coalition.

The OCC is broadly comparable to another industry trade organization called the Cloud Infrastructure Services Providers in Europe (CISPE), which launched in 2017 and has Amazon’s AWS as its flagship member alongside several dozen smaller players. Indeed, the OCC is a direct response to a settlement Microsoft reached with CISPE members (not including AWS) to abandon an antitrust complaint against a licensing change Microsoft had made in 2019, which made it more expensive to run its enterprise software on rival cloud services.

This veteran couldn't share 3D scans of a burnt naval ship, so he created a startup that can

In the summer of 2020, a fire broke out onboard a naval ship docked in San Diego Bay. For more than four days, the USS Bonhomme Richard burned as helicopters dropped buckets of water from above, boats spewed water from below, and firefighters rushed onboard to control the blaze. Before the embers had even cooled, lidar (Light Detection and Ranging) scans were taken to assess how bad the damage was and to figure out how the fire even started.

#3dscans #navy #ai #technology

But the investigation was stalled, partially because of how hard it is to send lidar scans.

Today’s leading cloud storage services — Google Drive, Dropbox, iCloud, and OneDrive — don’t support the massive three-dimensional files (sometimes multiple terabytes in size) used with lidar technology. The naval unit in San Diego was forced to overnight thumb drives and Blu-ray discs containing lidar scans of the charred naval ship to authorities around the country.

That’s what inspired U.S. Army veteran Clark Yuan to launch Stitch3D, a browser-based platform that lets you view, share, annotate, interact with, and manage your large 3D files. Each file is stored as a “point cloud”: a collection of millions of discrete points with x, y, and z coordinate values that digitally represent a 3D scene. Had Stitch3D existed at the time, it might have been easier to send the lidar scans of the USS Bonhomme Richard.
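
As a rough illustration of the point-cloud idea (not Stitch3D's actual format, which isn't public), here is a minimal Python sketch showing why these files balloon in size:

```python
# Minimal point-cloud sketch: millions of discrete (x, y, z) points
# representing a 3D scene. Purely illustrative; real lidar formats such
# as LAS/LAZ also carry per-point intensity, color, and timestamps.
import numpy as np

num_points = 5_000_000
points = np.random.rand(num_points, 3).astype(np.float32)  # one row per point

# Three 4-byte floats per point: even this modest cloud is ~60 MB raw,
# which is how survey-grade scans reach hundreds of gigabytes or more.
print(f"Raw size: {points.nbytes / 1e6:.0f} MB")
```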

MabLab's improved drug and drink testing strips could make for safer streets and venues

For anyone who parties or goes out dancing, the risk of accidentally taking adulterated drugs is a real one. MabLab, presenting today on the Startup Battlefield stage at TechCrunch Disrupt 2024, has created a testing strip that detects the five most common and dangerous additives in minutes.

#mablab #drug #technology #newsonleo

Co-founders Vienna Sparks and Skye Lam met in high school, and during college the pair lost a friend to overdose. It’s a story that, sadly, many people (including myself) can identify with. Thankfully, testing strips are a common sight now at venues and health centers, with hundreds of millions shipping yearly.

If you haven’t seen them, the strips work like this: You dissolve a bit of the substance to be tested in a provided buffer solution, then dip the strip in. The liquid travels up the paper, reaching a treated area that changes color in the presence of an unwanted additive. They’re simple and effective, but limited in that they only detect one thing, most commonly fentanyl.

“We have an opportunity to replace that with a better version,” said Lam — one that detects five common lacing chemicals simultaneously: fentanyl, methamphetamine, benzodiazepine, xylazine, and methadone.

The company’s innovation is “a mix of physical and chemical,” said Sparks: “There’s a zone specifically designed for each agent, and we’re using novel treatments and materials on the strip to allow capillary action to occur without incurring cross-reactivity.”

Automaker Ford weakens profit outlook amid price war, shares fall

Ford Motor said on Monday it expects to hit the lower end of its full-year profit guidance, dropping the company's shares 5% in after-hours trading, as a price war hits the U.S. automaker's bottom line.

  • Ford expects $10 billion EBIT this year, down from its prior $10 billion-$12 billion range
  • Third-quarter profit fell less than expected
  • Ford faces a $5 billion loss on EVs this year despite cost improvements

#ford #earnings #automotive #evs

Ford expects to earn about $10 billion in earnings before interest and taxes this year, down from its prior range of $10 billion to $12 billion.

"No doubt, there's a global price war, and it's fueled by over-capacity, a flood of new EV nameplates and massive compliance pressure," CEO Jim Farley said on a call with analysts.

Ford has also been weighed down this year by high warranty costs and problems with its supply chain, worsened by recent hurricanes, Chief Financial Officer John Lawler said.

Third-quarter profit fell less than expected, however.

The company reported third-quarter net income of $900 million, or 22 cents per share, down from 30 cents a year ago. Results were hurt by a $1-billion charge it took on cancelling production of a three-row electric SUV in August.

HRL Laboratories, Boeing Explore Use of Quantum Computers to Cut Costs of Rocket Launches

Researchers investigate how quantum computing could be used in calculations to stabilize cyclic ozone within fullerene cages.

#technology #quantum #rocket #space #computing #costs #newsonleo

  • Researchers are investigating how quantum computing could be used in calculations to stabilize cyclic ozone within fullerene cages, potentially leading to more efficient rocket propellants with up to 33% increased payload capacity — potentially a savings of millions per launch.
  • While the theoretical benefits are substantial, the practical application remains far off due to the immense computational resources required and unresolved technical hurdles.
  • If successful, quantum-assisted propellants could revolutionize rocket efficiency and reduce costs, but the technology is not yet ready for immediate implementation in the rocket-space industry.

The rocket-space industry is always on the lookout for ways to improve efficiency, reduce costs and push the boundaries of what’s possible in space exploration.

A recent study by HRL Laboratories and Boeing, published on the preprint server arXiv, explores a new approach that could lead to significant advancements in rocket propulsion by leveraging quantum computing. The focus is on stabilizing a high-energy-density molecule, cyclic ozone, within fullerene cages—a development that could dramatically enhance rocket fuel efficiency.

Cyclic ozone is an attractive candidate for rocket fuel due to its high energy density. However, its extreme reactivity has historically made it impossible to isolate and utilize effectively. The researchers propose that encapsulating cyclic ozone within fullerene cages, a type of carbon molecule, could stabilize the molecule and make it viable for use as a rocket propellant. This approach is similar to strategies previously considered for hydrogen storage and could potentially increase the specific impulse of rocket fuel—a measure of fuel efficiency—by up to 33%.

A 33% increase in specific impulse could translate to rockets carrying significantly more payload, thereby reducing the cost per launch and enhancing the overall efficiency of space missions. For instance, a SpaceX Falcon Heavy rocket, which currently can carry up to 63,800 kg to low Earth orbit (LEO), could potentially carry an additional 21,000 kg of payload if this technology were implemented. This would have profound implications for both commercial and scientific missions, offering more flexibility and capability at a lower cost.
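
A quick back-of-the-envelope check of those figures, assuming the article's simple proportional scaling (a rigorous estimate would need the Tsiolkovsky rocket equation and stage-by-stage mass fractions):

```python
# Sanity check of the quoted payload gain, scaling payload linearly
# with the claimed specific-impulse improvement. This is only the
# article's apparent approximation, not a full rocket-equation model.
falcon_heavy_leo_kg = 63_800   # quoted Falcon Heavy payload to LEO
isp_gain = 0.33                # claimed specific-impulse increase

extra_payload_kg = falcon_heavy_leo_kg * isp_gain
print(f"Additional payload: ~{extra_payload_kg:,.0f} kg")  # ~21,000 kg
```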

Step aside, Zoom fatigue, VRTL wants to make virtual fan events fun again

VRTL founder Courtney Jeffries describes herself as a “recovering sports executive.”

“That was my entire career before I threw it away to chase down my startup dreams,” Jeffries told TechCrunch.

#vrtl #zoom #technology #courtneyjeffries #virtualfan

After playing softball at the University of Washington, she spent almost 20 years working in marketing and sales for teams like the Oakland Raiders and the New York Rangers. But while Jeffries was leading fan retention initiatives at Madison Square Garden, she noticed a glaring opportunity.

“My whole job was to focus on extracting the lifetime value out of the fans, but quite obviously, there’s an over-indexing of attention on fans in the building,” she said. “The majority of fans are outside of an arena […] and there’s no platform, no way to scale in-person experiences that we know will trigger their loyalty.”

By 2022, Jeffries launched VRTL, an enterprise platform for entertainment companies — from sports teams to record labels — to capitalize on virtual fan experiences.

“It’s a very versatile platform that combines livestream, video chat, and then our proprietary suite of fan engagement experiences to drive those loyalties,” she said.

What makes VRTL, which pitched onstage today as part of the Startup Battlefield at TechCrunch Disrupt 2024, different from any other video chat or livestream service is not only that it gives clients valuable data, but also that it has proprietary fan engagement tools.

From Reddit:

By real-time I mean "seconds away".

I'm looking for a technology that would allow watching any YouTube video from common languages (e.g. English, French, German, Spanish) in other common languages. Possibly with a few seconds' delay, but that would not be a concern since we can delay the video rendering to sync audio/video.

I'm a nerd and deep into IT and AI but didn't take the time to look into that particular field deeply.

Pixel phones already have "interpreter mode" - they will translate audio being spoken between two languages. They do this in real time like an interpreter would.

I imagine the additional challenge for what you're asking for is just the audio mixing. The only added step is that you would need to take the "background noise" and mix the new spoken audio back in.
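
As a sketch of that mixing step (my illustration, not from the thread): once the original voice has been separated from the background, the dubbed speech is simply summed back onto the background track, with a peak check to avoid clipping. NumPy is assumed; a real pipeline also needs source separation and time alignment, which are separate, hard problems.

```python
# Toy audio-mixing step: sum synthesized speech onto the background track.
# Assumes mono float arrays in [-1, 1] at the same sample rate, already
# time-aligned; isolating the background from the original voice is the
# hard part and is not shown here.
import numpy as np

def mix(background: np.ndarray, dubbed_speech: np.ndarray,
        speech_gain: float = 0.9) -> np.ndarray:
    n = min(len(background), len(dubbed_speech))
    mixed = background[:n] + speech_gain * dubbed_speech[:n]
    peak = np.abs(mixed).max()
    return mixed / peak if peak > 1.0 else mixed  # normalize if clipping
```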

I went on a trip to Norway this June, 10,000 km all the way to Nordkapp. I expected at least something to go wrong, and one of my rear brake calipers got stuck and I lost braking. I immediately stopped, and somehow came to a halt right in front of a service shop.

With the help of my Pixel 7's interpreter mode I managed to talk to the mechanic just closing the shop and he looked at the smoking brake, told me to wait a few minutes for it to cool down and see if it works again, and it did.

At that moment I realized how much more of a hassle that would have been if not for my P7. Truly, we're at a level where we can travel anywhere and do anything without any help... as long as there's an internet connection lol

Noise cancelling headphones connected to your phone while your phone listens to the video from your pc. It's not pretty but it works

I work on dubbed translation software. Right now, our translation process is theoretically realtime (without lip syncing), taking about a minute per minute of audio. It just has some added setup time per each request that's mostly a scaling issue, and the website's UI is not actually set up to do realtime translation yet (instead it just takes in a video, translates the whole thing, and spits it out).

But realtime dubbed translation is absolutely possible with current tech, there's just some mostly non-ML-related barriers. u/perrochon is right about the legal issues. Realistically, what you'll more likely see soon is YouTubers adding multilingual audio tracks to their videos themselves as YouTube begins to roll out that feature more widely. If you know any YouTubers that might be interested, send them our contact :)

Some languages' grammar is backwards compared to English. So for them instantaneous translation would not be possible.

AI is already replacing translators AND interpreters in every legal and administrative space. Humans hopefully will have literature as a last bastion for a few years still. Source: I’m a translator. The industry is in shambles. The worst aspect of it is not losing your job, but losing respect for your work: clients saying shit like "but the AI said something different when I translated it," and having to explain idioms to an idiot.

YouTube has had machine generation of subtitles and then machine translation of those subtitles for a decade. All it needs after that is text-to-speech which is not hard. Translation is much quicker than the necessary transcoding of the video stream.

The reasons it's not there on YouTube are manifold, and include product (many people hate dubbing and want subtitles) and legal (you are making another audio track with the copyrighted sounds, you are wholesale using someone else's video to make a new video, the owner of the original content may not like the auto generated new sound track, etc.)

I have this today on my Google Pixel. Love it. I live half the time in the US and the other half in Thailand.

I don't think you need AI for this. Firefox is already beta testing automatic website translation, and you've been able to copy and paste text into Google Translate for years. If you're into this and want to create it, you just need a three-step program:

Audio transcription

Text translation

Text to speech

That's it. If you're looking to incorporate AI, then maybe something that samples the speaker's voice on the fly and reuses it in step 3 would provide for a Babel fish-like experience rather than an automatic voiceover.
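
A minimal sketch of that three-step chain, with every stage left as a placeholder since the commenter doesn't name specific tools; any speech recognizer, translation model, and TTS engine could be slotted in:

```python
# Three-step dubbing pipeline from the comment above, as placeholders.
# Each function is a stand-in for whatever ASR, MT, or TTS system you pick.

def transcribe(audio_chunk: bytes) -> str:
    """Step 1: speech-to-text (placeholder)."""
    raise NotImplementedError

def translate(text: str, target_lang: str) -> str:
    """Step 2: text translation (placeholder)."""
    raise NotImplementedError

def synthesize(text: str) -> bytes:
    """Step 3: text-to-speech (placeholder)."""
    raise NotImplementedError

def dub_chunk(audio_chunk: bytes, target_lang: str = "es") -> bytes:
    # Chain the three steps; a voice-cloning TTS in step 3 would give
    # the "Babel fish" effect the commenter describes.
    return synthesize(translate(transcribe(audio_chunk), target_lang))
```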

While you can do this without AI, AI can potentially offer some benefit. Primarily, it can learn to understand the context of what's being discussed and translate meaning more so than just words.

Translating words for words is one thing, but when a certain phrase has an unconventional meaning in one language when used in a specific context, and you translate the words to a different language, that meaning can be lost. Being able to use AI to understand those nuances, and hold context on an ongoing conversation, can potentially make the real-time translation much more meaningful (by translating the underlying message, not just the words).

You should be able to set that up with OpenAI's "realtime" version of the 4o model.

It costs like 20 cents a minute though...

Google Translate does it on my Pixel phone.

I was at the planetarium on Cozumel, and watched their video presentation with live subtitles courtesy of my phone. It wasn't perfect but good enough to follow what was going on.

I'm a professional interpreter, you won't be getting a service from AI like you can from me any time soon. Interpreting is human activity for humans and AI often confuses natural glitches in conversations as ends of sentences to name just one of its fundamental shortcomings.

I really dislike the current adoption of AI. Technically it works. But the quality is barely understandable most of the time.

But it seems this is good enough for a lot of people, which is scary when people in charge see it as a worthwhile cost-cutting option.

The AI of tomorrow will be vastly different from the AI of today

Will you take my call whenever I feel like I need a translation, for $20 a month?

Do you know English, Italian, Korean, Japanese, Chinese, Portuguese, Spanish, German, Russian and Arabic?

There are actually some apps that do a pretty damn okay job at that already... They're getting better very rapidly...

It honestly wouldn't surprise me if you could get almost real time translation through an ear bud from your phone or other device within the next decade at the rate we're moving...

Assuming society doesn't implode before then...lol

Right now it's just expensive and requires an internet connection.

There are fundamental linguistic challenges to actual real-time translation. Take this German sentence from an article in Der Spiegel:

In den aktuellen Gesprächen werde es darum gehen müssen, die Notwendigkeit eines von der Regierung in Gänze getragenen wirtschaftspolitischen Konzeptes zu betonen.

google translate renders it as

The current talks will have to focus on emphasizing the need for an economic policy concept that is fully supported by the government.

In the English, “emphasizing” is the ninth word. In the German, it comes from “zu betonen,” right at the end. You could not construct the full English translation until you had the full German sentence, right to the end. Fundamentally, the structures of the two languages differ in this way. This means the best you can hope for with on-the-fly translation is for a sentence to be translated only after it is fully complete.
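
That constraint is easy to show in code. A minimal sketch (my illustration) of a streaming translator that must buffer until a sentence boundary, precisely because a sentence-final verb like "zu betonen" can determine the English opening:

```python
# Streaming translation must buffer a whole sentence before emitting
# anything, since sentence-final words can reshape the entire output.
# translate_sentence is a placeholder for any MT system.

def translate_sentence(sentence: str) -> str:
    raise NotImplementedError  # stand-in for a real translator

def streaming_translate(tokens):
    buffer = []
    for token in tokens:
        buffer.append(token)
        if token.endswith((".", "!", "?")):  # sentence boundary reached
            yield translate_sentence(" ".join(buffer))
            buffer.clear()
    # Tokens still buffered here belong to an unfinished sentence:
    # the translator has no choice but to keep waiting.
```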

This exists!! I learned about this app a few months ago from someone on Reddit and tried it during a conference call with people who were speaking a different language and it worked (mostly) flawlessly. I fully understood the conversation in real time.

The app is called 3P0 and from what I understand it's from an independent dev who frequents Reddit. Pretty cool tech, honestly.

Never, because you don’t fully know what the person actually said until you hear the full sentence.

I've noticed there are increased interactions in certain apps between users of different languages, thanks to quick-translate buttons.

I think it's a positive thing, and I have faith in it.

Samsung phones do it with the Samsung earbuds. In real time.

Even if the computer were hyper-advanced and the algorithms perfectly trained, it would never be possible, because different languages have different sentence structures. Japanese sentence structure is nearly the reverse of English, for instance.

So even a perfect translator would have to wait for them to finish their sentence before it could spit it out for you.

Hearing aids that can translate foreign languages - Really!

Originally introduced to the world in 2018, Livio AI hearing aids significantly altered how many individuals use and think of hearing aids. And why is that? Because Livio AI was the first multi-purpose hearing device that not only sounds better than other hearing aids out there, it also lets you track your brain and body health, stream music, phone calls and more from your smartphone, and translate languages as you hear them.

#translation #language #hearingaids #technology

How does the translation tool work?

Select the language you speak and then choose the language of the person you plan to engage in a conversation with.

When you speak into your iPhone®, the Thrive Hearing app translates your speech and displays it on the screen in the other person’s language. Simply then show them the screen so they can see what you’re saying.

Alternatively, when the other person speaks into your phone, the app will translate their speech, display it in your language on the phone, and also stream the translated text to your hearing aids in your language.

Now, this same great technology has been updated to create an even better sounding hearing aid - Evolv AI. These outstanding hearing aids have the ability to translate 27 languages as you hear them! It’s easier to communicate with individuals who speak a different language - simply by using the translation tool in the Thrive Hearing app.

How Will Real-Time Translation Shape the Future of Language Learning?

With fast advancements in real-time translation and AI tools like video dubbing, I wonder if language learning will become less popular in the future.

Learning a new language takes a lot of time and effort. In 10-20 years, as technology reduces language barriers, will fewer people find it worth the investment? Could fluency in foreign languages become rarer as a result?

People who learn a language because they genuinely want to will likely continue to do so, but people who learn a language because they have to probably won't do it.

I'm curious to hear your thoughts. This is just a thought experiment—I believe language learning has many unique benefits that technology can't replace!

Old school analog face to face communication will always be welcome over the people who rely on the machines.

It'll be like Star Trek where the machine isn't even visible during a face to face conversation. People will just have their headphones popped in listening to the live translation. People already walk around with headphones in all the time

I can't imagine "real time" translation will truly happen; there will always be a delay of some sort.

(To clarify: when I think "real time translation", I'm thinking, like, an earpiece you wear that translates what somebody is saying as they're saying it, giving the impression that they are speaking your language)

Sentence structure varies so much between languages that this type of translation simply isn't feasible. For example: type a moderately complex sentence into DeepL and watch how the translated sentence transforms wildly with nearly every new word.

For instance, the sentence "he has been taking care of his grandmother for twenty dollars an hour":

"he has" -> "tiene", as in has "he possesses"
"he has been" -> "ha sido". as in "he has been [a doctor]"
"he has been taking" -> "ha estado tomando" as in "he has been taking [medicine]"
etc etc.

Yeah, real-time translation would be hard with changes in syntax; not only would things like noun/verb order demand a delay, but so would bigger issues of syntax like SVO vs. SOV word order. It could get close to instantaneous, but never quite the speed of someone who just speaks the language.

Meaning gets lost in translation, anyone who speaks more than one language knows this. French poems translated to English don’t feel the same, English literature translated to French doesn’t feel the same.

People learn languages to feel other cultures, not simply understand them.

I don't think it will change much of anything. Unless we can forgo language entirely and just skip straight to transmitting direct meaning. THAT will be revolutionary in many many many ways. For good AND for bad....

It is just my opinion, but I don't think computer programs will ever be as good as humans. I spent an entire career as a software programmer, so I know how programs work.

Programs are created by humans. Human language experts work out the grammar of a language, and they work with programmers who turn all the rules into numbers. No matter how smart the humans are, they are creating a set of rules in advance. The computer has zero intelligence. It can only run the rules (after they are turned into numbers) that the humans create. So nobody intelligent sees the actual sentence you are translating. A computer program is just following a large set of rules created by humans in the past.

For translation, you have the grammars of different languages. Only a human who is fluent in both languages can create the set of rules. And how many rules are needed? The good thing is that many different human experts can all contribute to the same set of rules. Once the rules are written down, they won't be forgotten. A good translation program is probably many man-years of work.

The real issue is this: can a set of rules translate every sentence correctly? Or are languages too complicated?

Don't worry. Won't be advanced enough in at least a couple decades. No need to stop studying.

And even then, literal translations will lose much of the nuance and cultural differences.

If I know how to swim, I don’t need a swimming ring. If I love swimming, it doesn’t matter whether there are swimming rings.

Good "AI" translation is straight-up impossible because 1:1 translation is impossible, things can be conveyed in one language that there's no easy way to express in another and while there's ways for creative translators to still convey those ideas, it's something that "AI" simply can't do. "AI" translations are going to create more language learners if they catch on at all, because "AI" translations are so shallow and low quality (if not outright incomprehensible) that they force you to learn the language if you want to get the original point at all.

are people forgetting that actual humans, communities, and societies speak the languages that you learn ????? why learn a language for translation purposes and ONLY use it for translation purposes??? whatever happened to actually TALKING TO PEOPLE ....... what the hell

The thing that OP is referring to would be instant translation that DOES allow you to talk to people seamlessly and as easily as if you were talking to them directly.

I love learning languages and really hope AI doesn’t replace human translation, but I’m sure it will.

All these folks mentioning the nuances of languages and the impossibility of machines grasping the intricacies of human communication don't seem to recognise the sheer power and the barely imaginable future abilities of computers, I believe.

Hope I’m dead long before it happens, but I reckon AI’s coming for all you translators’ jobs!!

Already because of how prevalent English has become, and because of online translation tools, there is less of a need for translators.

The universities in my country have abandoned some of the smaller language choices and reduced the number of students they accept.

I do find it worrying. Although you might be able to communicate with all these people, it is important, especially politically, to be able to understand what is going on in countries in their own language. If you get it all in a filtered way, you are going to lose out on details and attitudes that might be important.

I think real-time translation will always be a bit laborious and won't really replace language learning.

I think that needing to be comfortable in the written version of the language is already much less important. Translation and generative AI can already do the writing for you, good enough for 90% of situations.

But, I do think speaking is something that will still be an important skill to learn.

Imagine you're at a business networking event and the guy next to you has learnt English and can speak it himself, while you're trying to rely on some real-time translator tech. The other guy will have the edge.

Imagine going on a date or having a romantic relationship with someone where you only communicate by translation tech. It just doesn't seem the same.

I don’t think it’s gonna replace language learning soon but god I wish it’d happen soon to replace my language learning. Absolutely hate it, but have to. Real time face to face translation would also be great in eliminating code switching so I don’t have to bother to speak my native language ever again, that’d be a massive improvement in quality of life as well.

But yeah, I agree that language learning purely as a tool might get rarer, just like people don’t ride horses around as a means of transportation that commonly anymore; but riding a horse for fun and even for transportation too didn’t get replaced by the invention and proliferation of cars.

DGLegacy wants to help you ensure your loved ones inherit your assets

DGLegacy, a company that’s designed a digital legacy planning and inheritance app, pitched today at TechCrunch Disrupt 2024 Startup Battlefield to detail how it’s helping people ensure that their loved ones inherit their assets.

#dglegacy #asset #inheritance #technology

Founded by husband-and-wife duo Ana Mineva and Peter Minev, DGLegacy allows users to proactively inform their beneficiaries of their assets and ensure they are aware of their passwords and other information in order to claim them. The idea behind the app is to minimize the chance of an unclaimed asset.

Unlike traditional asset protection tools like trusts and wills, which can become outdated soon after their creation, DGLegacy lets you keep a continuously updated catalog of your assets and ensures that beneficiaries will always have access to it.

With DGLegacy, you can catalog your assets and upload relevant files to the respective asset. You can then invite beneficiaries and trustees to ensure that they will be informed about their designated assets.

The app features multi-layer encryption to ensure that all of the information is stored securely.

“The most important thing about DGLegacy is that it allows you to not only catalog very securely and easily your digital assets, but also has a proprietary protocol for detecting a fatal event,” Peter told TechCrunch. “Only when a fatal event is detected, then we trigger the digital inheritance.”
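
DGLegacy hasn't published how its detection protocol works, but the general pattern is a "dead man's switch": the service periodically asks the user to check in and only triggers the inheritance flow after several consecutive missed confirmations. A minimal sketch of that generic idea (thresholds are arbitrary examples, not DGLegacy's actual values):

```python
# Generic dead-man's-switch logic (illustrative; not DGLegacy's actual
# protocol, which is proprietary). The thresholds are arbitrary examples.
from datetime import datetime, timedelta

CHECK_INTERVAL = timedelta(days=30)   # how often the user must confirm
MISSES_BEFORE_TRIGGER = 3             # tolerated missed confirmations

def should_trigger(last_check_in: datetime, now: datetime) -> bool:
    missed = (now - last_check_in) // CHECK_INTERVAL
    return missed >= MISSES_BEFORE_TRIGGER

# Example: last confirmation 100 days ago -> three missed check-ins.
print(should_trigger(datetime(2024, 7, 1), datetime(2024, 10, 9)))  # True
```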

Radio station drops "Gen Z" AI presenters after a week following public outrage

On October 21, Radio Krakow announced that it was revamping its OFF station, introducing three AI-created voice hosts representing Generation Z.

In yet another example of how most people don't want AI replacing humans – no matter how much executives say it saves money – a Polish radio station has abandoned an experiment where its journalists were dismissed and replaced with AI "presenters." The test was supposed to last three months, but the station decided to end it after just a week following a massive backlash from the public.

#technology #genz #robots #ai #jobs #Humans

On October 21, Radio Krakow announced that it was revamping its OFF station, introducing three AI-created voice hosts representing Generation Z. These avatars were 20-year-old journalism student and pop culture expert Emilia Nowa, 22-year-old Acoustic Engineering student Jakub Zielinski, and 23-year-old former psychology student Alex, who is "socially engaged, passionately discussing topics related to identity [and] queer culture."

Response to the move was about as vitriolic as one would expect. Exacerbating the anger was the fact that Radio Kraków's human hosts were no longer at the station because they were "external collaborators" who had not had their contracts renewed, and "not because of AI," claimed editor-in-chief Marcin Pulit.

In an Amazon-level move of PR brilliance, the radio decided that the first thing the AIs should do is interview Wislawa Szymborska, the Nobel Prize-winning Polish poet and writer who died in 2012. This was, of course, another AI recreation, leading to even more outrage.

Most current AI tech is 90 percent marketing, says Linus Torvalds

Torvalds said that the current state of AI technology is 90 percent marketing and 10 percent factual reality.

In a nutshell: Linus Torvalds never minces words when asked to comment on open-source support or the latest technology trends. The Finnish software engineer recently joined an open-source focused event, where he had a thing or two to say about AI technology and "intelligent" algorithms.

#newsonleo #ai #technology #hype #marketing #linustorvalds

Torvalds said that the current state of AI technology is 90 percent marketing and 10 percent factual reality. The developer, who won Finland's Millennium Technology Prize for the creation of the Linux kernel, was interviewed during the Open Source Summit held in Vienna, where he had the chance to talk about both the open-source world and the latest technology trends.

The outspoken technologist said that modern generative AI services are an interesting development in machine learning technology, and that they will eventually change the world. At the same time, he expressed his dissatisfaction with the "hype cycle" that is fueling too many AI-related initiatives and contributing to Nvidia's impossibly high market valuation.

Everyone and their dog is currently talking about AI, sticking some AI-based cloud service together, or funding an AI-focused multi-million-dollar startup somewhere in the world. Torvalds hates the hype cycle so much that he doesn't even want to go there. The developer is essentially ignoring everything AI, though things will likely change in a drastic way a few years from now.

Linus Torvalds says AI will change the world but it is currently 90% marketing and 10% reality, and it will take another 5 years for it to become clear what AI is really useful for pic.twitter.com/6knFEfJbqf

– Tsarathustra (@tsarnick) October 21, 2024

Mostly hype and deception to trick investors. But at the same time, these systems are developing at a rapid rate.

Apple Intelligence is now available on iPhone, iPad, and Mac

New Apple Intelligence features are now available on recent iPhone models, iPads, and Macs.

The big picture: Apple has launched its answer to the generative AI models and features offered by the likes of OpenAI, Google, and Microsoft. Although Apple Intelligence is now available to owners of compatible devices, many of its intended features are still slated for release in the coming months. Furthermore, leaked internal communications suggest that its capabilities currently fall significantly behind those of ChatGPT.

#apple #appleintelligence #ai #iphone #ipad #mac #technology

New Apple Intelligence features are now available on recent iPhone models, iPads, and Macs. Users can access the generative AI suite by updating the operating systems to iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1.

Supported devices include the iPhone 15 Pro, all iPhone 16 models, and all devices with Apple M-series processors. Apple Intelligence currently only supports US English. The December update will add support for Australian, Canadian, Irish, New Zealand, South African, and UK English. Beginning in April and continuing throughout 2025, Apple will expand support to Chinese, Indian English, French, German, Spanish, and additional languages.

With the Writing Tools function, Apple Intelligence can edit, summarize, and rewrite text in Mail, Notes, Pages, Messages, and other apps. It can proofread, make alterations, and explain its editing choices to enhance users' writing, similar to Grammarly. Moreover, selected text can be condensed into bullet points, lists, or tables.

The summarization feature also works on notifications, long message chains, and recorded phone calls. When Apple Intelligence begins recording, all participants are immediately notified.
