The TC39 committee, which oversees JavaScript standards, advanced three proposals to Stage 4 at its February meeting. Reaching Stage 4 means a proposal is finished and ready for inclusion in ECMAScript, the standard on which JavaScript is based.
Sarah Gooding, head of content marketing at Socket, reported the JavaScript updates on the software security company’s blog. Advancing to Stage 4 were the following proposals:
- Float16Array introduces a new typed array to handle 16-bit floating-point numbers (float16) in JavaScript. “This addition would complement existing typed arrays like Float32Array and Float64Array, providing a more memory-efficient option for applications where full 32-bit or 64-bit precision isn’t necessary,” Gooding wrote.
- Redeclarable Global eval Variables simplifies JavaScript’s handling of global variables introduced via eval. “Currently, variables declared with var inside a global eval are configurable properties, yet redeclaring them using let or const results in an error,” Gooding explained. “This proposal seeks to allow such redeclarations, streamlining the language’s behavior and reducing complexity for developers.”
- RegExp Escaping introduces a RegExp.escape function to JavaScript. “This function allows developers to escape special characters in strings, enabling their safe incorporation into regular expressions without unintended interpretations,” Gooding said. It’s been a recognized need for years, she added.
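A brief sketch of how two of these additions look in practice. Both uses are feature-detected, since runtimes that haven’t yet shipped these ES2025 features will lack them, and the fallback escaper is a rough approximation of the spec algorithm, not a faithful implementation:

```javascript
// Float16Array: half-precision floats at 2 bytes per element --
// half the memory of Float32Array when full precision isn't needed.
if (typeof Float16Array !== "undefined") {
  const half = new Float16Array([0.5, 1.5, 2.5]);
  console.log(half.BYTES_PER_ELEMENT); // 2
}

// RegExp.escape: safely embed an arbitrary string in a pattern.
// The fallback below is an approximation, not the spec algorithm.
const escapeRe =
  typeof RegExp.escape === "function"
    ? RegExp.escape
    : (s) => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

const userInput = "price (USD)?";
const re = new RegExp(escapeRe(userInput));
console.log(re.test("What is the price (USD)?")); // true

// Redeclarable global eval vars (a behavior change, shown conceptually):
// (0, eval)("var fromEval = 1"); // creates a configurable global
// let fromEval = 2;              // previously an error; the proposal allows it
```

Without escaping, the `(`, `)` and `?` in the user input would be parsed as regex syntax rather than matched literally.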
JetBrains Team Assesses AI on Kotlin Knowledge
Large language models are good at discussing Kotlin and can answer questions about it, but their knowledge is incomplete and can even be outdated, warns a recent analysis of AI and Kotlin.
As if that weren’t problematic enough, artificial intelligence is also prone to typical large language model errors such as miscounting or losing context, writes software developer Vera Kudrevskaia on JetBrains’ Kotlin blog.
JetBrains Research tested commonly used AI models, including DeepSeek-R1, OpenAI o1 and OpenAI o3-mini, using a new benchmark the team created for evaluating Kotlin-related questions.
“We looked at how they perform overall, ranked them based on their results, and examined some of DeepSeek’s answers to real Kotlin problems in order to give you a clearer picture of what these models can and can’t do,” Kudrevskaia said. “Our evaluation showed that the latest OpenAI models and DeepSeek-R1 are the best at working with Kotlin code, with DeepSeek-R1 having an advantage in open-ended questions and reasoning.”
The research team also did a code test of DeepSeek that’s worth reviewing.
Overall, the results show that a model can be more adept at a language than other, similar models. But there are other factors that come into play, such as a model’s speed.
So, programmer beware.
Those who have found incorrect or surprising LLM responses are invited to share them in the public Kotlin Slack or in the blog’s comments section.
OpenAI Releases Research Preview of GPT-4.5
OpenAI released a research preview of GPT-4.5, which the company calls its largest and best model for chat.
“GPT‑4.5 is a step forward in scaling up pre-training and post-training,” the company said in a blog post introducing the new model. “By scaling unsupervised learning, GPT‑4.5 improves its ability to recognize patterns, draw connections, and generate creative insights without reasoning.”
Its broader knowledge base, improved ability to understand user intent and greater “EQ” (emotional quotient) make it better at writing, programming and solving practical problems, the company claimed. GPT-4.5 engages in warmer, more intuitive and natural-flowing conversations, it added.
Perhaps more significantly, the team said it may hallucinate less. Regardless, the research preview will help OpenAI better understand its strengths and limitations.
“We’re still exploring what it’s capable of and are eager to see how people use it in ways we might not have expected,” the team wrote.
In addition to the blog post, there’s an approximately 13-minute video introduction to GPT-4.5.
Next.js 15.2 Updates Turbopack, Debugging
Next.js 15.2 was released Wednesday with a redesigned debugging experience, streaming metadata and Turbopack performance updates.
In essence, the Next.js team has redesigned the error UI and produced clearer stack traces to make debugging easier.
Also with this release, async metadata will no longer block page rendering or client-side page transitions, thanks to the introduction of streaming metadata.
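As a rough illustration (the file path and `fetchProduct` helper here are hypothetical, not Next.js APIs), an async `generateMetadata` export like the following no longer holds up the initial render once metadata streams in:

```jsx
// app/products/[id]/page.js -- illustrative route; fetchProduct is a
// hypothetical data helper, not part of Next.js.
export async function generateMetadata({ params }) {
  // Slow metadata work now streams in; the page below renders immediately.
  const { id } = await params;
  const product = await fetchProduct(id);
  return { title: product.name };
}

export default function Page() {
  return <h1>Product details</h1>;
}
```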
Thanks to Turbopack performance improvements, users should also experience faster compile times and reduced memory usage. Early adopters have reported up to 57.6% faster compile times when accessing routes compared to Next.js 15.1, the team noted. Vercel also saw a 30% decrease in memory usage during local development.
“With these improvements, Turbopack should now be faster than Webpack in virtually all cases,” the team noted. “If you encounter a scenario where this isn’t true for your application, please reach out — we want to investigate these.”
Finally, Next.js 15.2 introduces experimental support for React’s new View Transitions API and the Node.js runtime in middleware.
The post Three JavaScript Proposals Advance to Stage 4 appeared first on The New Stack.