Wednesday, July 10, 2024

An even faster Microsoft Edge

Published May 28, 2024

Over the past month, you may have noticed that some of Edge’s features have become faster and more responsive. That’s because Edge is on a journey to make all user interactions in the browser blazingly fast, starting with some of our newest features and core features.

Starting with Edge 122, the Browser Essentials UI is now much more responsive. The UI is now 42% faster for Edge users and a whopping 76% faster for those of you on a device without an SSD or with less than 8GB RAM!

Favorites is another Edge feature that’s getting UI responsiveness improvements in Edge 124. Whether favorites are expanded or collapsed, the experience should be 40% faster.

And this is just the tip of the iceberg. Over the coming months we will continue to ship responsiveness improvements to many more Edge features including history, downloads, wallet and more.

We’d love for you to try Microsoft Edge and let us know what you think. Tell us about your experience by using the feedback tool in Edge: click Settings and more (…) > Help and feedback > Send feedback.

Read on for more details on how we made this all possible.

Monitoring UI responsiveness

Edge’s UI responsiveness improvements started with understanding what you, our users, were experiencing. Edge monitors its UI responsiveness via telemetry collected from end users’ machines. We intentionally did this collection for all parts of the Edge UI, not just for the web pages that we render; a small sketch of what this kind of measurement can look like follows the list below. What did we learn from this data?

  • Research indicates that there are certain absolute responsiveness targets that must be met for a user to perceive the UI as fast, and data showed our UI could be more responsive.
  • We had an opportunity to improve responsiveness for lower-resourced devices.
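
As a concrete illustration of that kind of measurement, here is a minimal sketch of how a web-based UI can observe its own interaction responsiveness using the standard Event Timing API. It is only a sketch: the 100 ms target and the console reporting are hypothetical, and this is not Edge’s actual telemetry pipeline.

  // Minimal sketch: watch for slow interactions using the Event Timing API.
  // The 100 ms target and the console reporting are illustrative only.
  const RESPONSIVENESS_TARGET_MS = 100;

  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.duration > RESPONSIVENESS_TARGET_MS) {
        // A real product would aggregate these and report them via telemetry.
        console.warn(`Slow interaction: ${entry.name} took ${entry.duration} ms`);
      }
    }
  });

  // "event" entries cover input handling; durationThreshold filters out the fast ones.
  observer.observe({ type: "event", buffered: true, durationThreshold: 16 });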

We are constantly learning more about how we can improve the performance of the Edge UI and, by using this data, we discovered several areas for improvement. For example, we observed that the bundles of code that many of our components used were too large. We realized that this was due to two main reasons:

  1. The way we organized the UI code in Edge wasn’t modular enough. Teams who worked on different components shared common bundles even when that wasn’t strictly necessary. This resulted in one part of the UI code slowing down another part by sharing things unnecessarily (see the sketch after this list).
  2. A lot of our code was using a framework that relied on JavaScript to render the UI. This is referred to as client-side rendering, which has been a popular trend over the past decade because it helped web developers be more productive and enabled more dynamic UI experiences.
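
As an illustration of the first point, code that only one surface needs can be kept out of everyone else’s startup path by loading it on demand. This is a generic sketch using a dynamic import; the file and class names are hypothetical, not Edge’s real module layout.

  // Hypothetical example: load the Favorites pane's code only when it is
  // first opened, instead of bundling it with every other UI surface.
  async function openFavoritesPane(): Promise<void> {
    // The dynamic import keeps this module out of the shared startup bundle;
    // it is fetched and parsed only on first use.
    const { FavoritesPane } = await import("./favorites-pane.js");
    new FavoritesPane().show();
  }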

Rendering web UI like it was meant to be

Why are we sharing this ancient news? After all, a lot of web pages have been rendering on the client side for years. Well, it turns out that JavaScript must be downloaded, run through a JIT compiler (even code you never use), and then executed, and all of this must happen before any of the JavaScript can start rendering the UI. This introduces a lot of delay before users can see the UI, especially on low-end devices.

If you turn back the clock to before the Web 2.0 era, web content was rendered using HTML and CSS. This is often referred to as server-side rendering, because the client gets the content in a form that’s ready to render. Modern browser engines are very fast at rendering this content, so long as you don’t let JavaScript get in the way.
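
To make the contrast concrete, here is a hedged, simplified comparison of the two approaches. Neither snippet is taken from Edge; it only illustrates why script on the critical path delays the first paint.

  // Client-side rendering: this script must be downloaded, compiled, and
  // executed before anything appears on screen.
  function renderClientSide(items: string[]): void {
    const ul = document.createElement("ul");
    for (const item of items) {
      const li = document.createElement("li");
      li.textContent = item;
      ul.appendChild(li);
    }
    document.body.appendChild(ul);
  }

  // Markup-first equivalent: the client receives HTML that the engine can
  // paint immediately, with no script on the critical rendering path.
  const markupFirstHtml = `
    <ul>
      <li>History</li>
      <li>Downloads</li>
      <li>Wallet</li>
    </ul>`;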

Based on this realization, our questions became:

  1. Could we maintain the developer productivity that JavaScript frameworks have given us while generating code that renders UI fast?
  2. Could the browser be its own best customer?
  3. How fast could we make things if we removed that framework and built our UI just by using the web platform?

The answers to these questions are Yes, Yes, and Very Fast.

Introducing WebUI 2.0

The result of this exercise is an Edge internal project that we’ve called WebUI 2.0.

In this project, we built an entirely new markup-first architecture that minimizes the size of our bundles of code, and the amount of JavaScript code that runs during the initialization path of the UI. This new internal UI architecture is more modular, and we now rely on a repository of web components that are tuned for performance on modern web engines.  We also came up with a set of web platform patterns that allow us to ship new browser features that stay within our markup-first architecture and that use optimal web platform capabilities.
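
As a rough sketch of what “markup-first” can look like in practice (this is an illustration only, not a component from WebUI 2.0’s actual repository), the structure ships as HTML and the custom element only attaches the small amount of behavior that genuinely needs script:

  // The markup is delivered ready to render, for example:
  //   <essentials-card>
  //     <h2>Browser essentials</h2>
  //     <button data-action="refresh">Refresh</button>
  //   </essentials-card>
  // (element and attribute names are hypothetical)

  class EssentialsCard extends HTMLElement {
    connectedCallback(): void {
      // No client-side templating: the children are already in the DOM.
      // Script is limited to wiring up interactivity.
      this.querySelector('[data-action="refresh"]')
        ?.addEventListener("click", () => {
          this.dispatchEvent(new CustomEvent("refresh-requested", { bubbles: true }));
        });
    }
  }

  customElements.define("essentials-card", EssentialsCard);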

Browser Essentials is the first Edge feature that we converted, to test the new architecture and to prove that the concept worked, especially across all types of devices. We’re in the process of upgrading components of the Edge user interface to WebUI 2.0, and you can expect to see more features of the browser getting far more responsive over time.

We hope that more websites start moving in this direction of markup-first, small bundles, and less UI-rendering JavaScript code. Over time, we plan on making some of our packages open source, so that all developers can benefit. Finally, as we continue improving WebUI 2.0, we’re committed to finding opportunities to improve the web platform itself even more.

We hope you enjoy this upgraded Edge experience!

Researchers at ETH Zurich develop the fastest possible flow algorithm

Rasmus Kyng has written the near-perfect algorithm. It computes the maximum transport flow at minimum cost for any kind of network – be it rail, road or electricity – at a speed that is, mathematically speaking, impossible to beat.


IEEE Symposium on Foundations of Computer Science (FOCS) 2024. https://focs.computer.org/2024/accepted-papers-for-focs-2024/

Chen, L., Kyng, R., Liu, Y. P., Meierhans, S., Probst Gutenberg, M. Almost-Linear Time Algorithms for Incremental Graphs: Cycle Detection, SCCs, s-t Shortest Path, and Minimum-Cost Flow. Proceedings of the 56th Annual ACM Symposium on Theory of Computing (STOC 2024), June 2024. doi: https://doi.org/10.1145/3618260.3649745.

Chen, L., Kyng, R., Liu, Y. P., Peng, R., Probst Gutenberg, M., Sachdeva, S. Maximum Flow and Minimum-Cost Flow in Almost-Linear Time. 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), Denver, CO, USA, 2022. doi: 10.1109/FOCS54457.2022.00064.


Stable Diffusion 3 License Revamped Amid Blowback, Promising Better Model

Stability AI released a more permissive license for Stable Diffusion 3 amid recent controversy, but does it go far enough?


In brief

  • Stable Diffusion users pushed back last month after Stability AI launched restrictive new licensing terms.
  • The firm announced late Friday that it has relaxed its conditions, to mixed reactions from users.
  • Users still cannot create a new foundational model by training it on SD3-generated work.

Stability AI unveiled a revamped Community License for its Stable Diffusion 3 (SD3) model, aiming to quell the firestorm of controversy that erupted following the initial release. The company's move comes on the heels of a ban by CivitAI, a major community hub, which had barred all SD3-related content due to licensing concerns.

“We acknowledge that our latest release, SD3 Medium, didn’t meet our community’s high expectations,” Stability said in an announcement late Friday. “We heard you and have made improvements to address your concerns and to continue to support the open-source community."

Under the new terms, Stability AI grants free use of SD3 for research, non-commercial, and limited commercial purposes. The license also allows individuals and businesses with annual revenues under $1 million to use the model without charge. Those exceeding this threshold must obtain a paid enterprise license.

In an interview, the company confirmed that it is OK to create custom SD3 models and build on top of the base SD3. However, it’s forbidden to develop a new foundational model using images generated with SD3 as part of its training dataset; that is, training a Stable Diffusion competitor using material from the original model.

“Derivative products include any output derived from Stability AI's Foundational models, such as fine-tuned models or other creative outputs,” a spokesperson from Stability AI told Decrypt. “Examples of derivative works include SD3 fine-tunes, LoRA fine-tunes, adapters etc. and these can also be trained with SD3 output images.”

The license also says that “you are the owner of derivative works you create, subject to Stability AI’s ownership of the Stability AI materials and any derivative works made by or for Stability AI.” In other words, as long as those boundaries are respected, fine-tuning and profiting from it should not be against the terms and conditions.

“To safeguard our IP, it is not permitted to train new foundational AI models using SD3 outputs as training data, and all activity must adhere to our acceptable use policies,” the company spokesperson told Decrypt.



Semiconductor Recycling: Addressing E-Waste Challenges

The increasing demand for electronic devices, from smartphones to electric cars, has ...