Developers – Linux.com (https://www.linux.com) – News For Open Source Professionals – Thu, 15 Feb 2024

Linux 6.8 Brings More Sound Hardware Support For Intel & AMD, Including The Steam Deck
https://www.linux.com/news/linux-6-8-brings-more-sound-hardware-support-for-intel-amd-including-the-steam-deck/
Tue, 16 Jan 2024

The post Linux 6.8 Brings More Sound Hardware Support For Intel & AMD, Including The Steam Deck appeared first on Linux.com.

The sound subsystem updates for Linux 6.8, which include a lot of new sound hardware support, are waiting to be pulled into the mainline kernel once Linus Torvalds is back online following Portland’s winter storms.

Linux sound subsystem maintainer Takashi Iwai at SUSE describes the new sound hardware support for Linux 6.8 as:

“Support for more AMD and Intel systems, NXP i.MX8m MICFIL, Qualcomm SM8250, SM8550, SM8650 and X1E80100”

Read more at Phoronix

Linux Foundation Newsletter: October 2023
https://www.linux.com/news/linux-foundation-newsletter-october-2023/
Thu, 19 Oct 2023

The post Linux Foundation Newsletter: October 2023 appeared first on Linux.com.

This month’s newsletter will be one of our biggest ever! In October, our communities met in person at the Open Source Summit Europe in Bilbao and at KubeCon + CloudNativeCon + OSS in Shanghai, China. At OpenSSF’s Secure Open Source Summit in Washington, DC, we continued advancing important conversations to improve the security of software supply chains. We had a record month at LF Research, with four new reports published since our last newsletter, covering brand-new topics including the mobile industry and Europe’s public sector, as well as year-over-year trends specific to European open source and the state of the OSPO. And, of course, there’s lots of project news for you to catch up on, including the announcement of OpenPubkey, a zero-trust passwordless authentication system for Docker.

Read the October Newsletter at the Linux Foundation Blog

Open Mainframe Summit Call for Papers Now Open
https://www.linux.com/news/open-mainframe-summit-call-for-papers-now-open/
Wed, 07 Jun 2023

The post Open Mainframe Summit Call for Papers Now Open appeared first on Linux.com.

Open Mainframe Project announces Co-Located Events with IBM TechXchange in September and Open Source in Finance Forum in November

SAN FRANCISCO, June 7, 2023 – The Open Mainframe Project, an open source initiative that enables collaboration across the mainframe community to develop shared tool sets and resources, today announced the launch of the Call for Proposals (CFP) for the 4th annual Open Mainframe Summit. This year, the premier mainframe event will be co-located with two industry conferences – the IBM TechXchange Conference 2023, hosted in Las Vegas on September 11-14, and the Open Source in Finance Forum, hosted in New York City on November 1.

“As mainframe technology and events evolve and mature, it becomes a more natural evolution to align Open Mainframe Projects and activities with other industry events,” said John Mertic, Director of Program Management at the Linux Foundation and Executive Director of the Open Mainframe Project. “This year, by partnering with IBM and FINOS, we are offering attendees the opportunity to enhance their experience with unique presentations and targeted conversations with industry experts.” 

“As open source becomes the default development model for many enterprises, mainframe customers are looking to embrace community developed code for their mainframe environments,” said Steven Dickens, Vice President and Practice Leader at the Futurum Group. “The Open Mainframe Project has established itself as the go-to community for mainframe developers, enterprises and vendors alike.  The events announced today are a key part of how the community will gather to advance code on the mainframe.” 

Open Mainframe Summit aims to connect and inform all those interested in growing the use of mainframes and related technology in dynamic technical and educational sessions. It is open to students, developers, corporate leaders, users and contributors of projects from around the globe looking to learn, network and collaborate. It will feature content tracks that tackle both business and technical strategies for enterprise development and deployment.

Open Mainframe Summit – Las Vegas

IBM TechXchange Conference offers technical breakout sessions, hands-on experiences, product demonstrations, instructor-led labs, and certifications tailored to your interests and learning style. Open Mainframe Summit will be featured as part of the TechXchange Community Day on September 11. Community Day unites diverse IBM user groups and technical communities to foster collaboration, networking and learning. Learn more here

Open Mainframe Summit – New York

Open Source in Finance Forum is dedicated to driving collaboration and innovation in financial services through open source software and standards. The event brings together experts across financial services, technology, and open source to engage our community in stimulating and thought-provoking conversations about how to best (and safely) leverage open source software to solve industry challenges. Open Mainframe Summit will be featured as part of a 6-session track and a 10-minute keynote presentation. Learn more about the event here

Submit a Proposal

The Call for Proposals is now open and will accept submissions until Friday, June 30, 2023. Interested speakers for either event can submit proposals for 20-minute talks, 30-minute sessions, 60-minute panel discussions, or 60-minute workshops or labs. All topics that benefit the Open Mainframe ecosystem are welcome, including (but not limited to) AI, machine learning, building the next workforce, cloud native, COBOL, Java, hybrid cloud, diversity and inclusion, z/OS and Linux on Z, and security.

Submit a proposal: http://cfp.openmainframesummit.org/

Meet the Program Committee

A program committee, which includes active community members and project leaders, will review and rate the proposals. Open Mainframe Project welcomes Alan Clark, CTO Office and Director for Industry Initiatives, Emerging Standards and Open Source at SUSE, Donna Hudi, Chief Marketing Officer at Phoenix Software International, Elizabeth K. Joseph, Global Head of the OSPO for IBM zSystems at IBM, Rose Sakach, Offering Manager, Mainframe Division at Broadcom, Inc., and Len Santalucia, CTO at Vicom Infinity, A Converge Company.  

We encourage community leaders, creators, developers, implementers, and users to submit presentations. Whether you are a seasoned presenter or a first-time speaker, we welcome your submissions. While we expect a key focus on work within the Open Mainframe Project’s 21 hosted projects and working groups, user experiences and tips and tricks are often among attendees’ favorite sessions.

For more details about Open Mainframe, or to watch the videos from Open Mainframe Summit 2022, check out the Open Mainframe Project 2022 Annual Report.

For more about Open Mainframe Project, visit https://www.openmainframeproject.org/.  

About the Open Mainframe Project

The Open Mainframe Project is intended to serve as a focal point for deployment and use of Linux and Open Source in a mainframe computing environment. With a vision of Open Source on the Mainframe as the standard for enterprise class systems and applications, the project’s mission is to build community and adoption of Open Source on the mainframe by eliminating barriers to Open Source adoption on the mainframe, demonstrating value of the mainframe on technical and business levels, and strengthening collaboration points and resources for the community to thrive. Learn more about the project at https://www.openmainframeproject.org.

About The Linux Foundation

The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, PyTorch, RISC-V, SPDX, OpenChain, and more. The Linux Foundation focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

Maemalynn Meanor

The Linux Foundation

maemalynn@linuxfoundation.org

Creating a ‘Minimum Elements’ SPDX SBOM Document in 5 Minutes
https://www.linux.com/news/creating-a-minimum-elements-spdx-sbom-document-in-5-minutes/
Wed, 03 May 2023

The post Creating a ‘Minimum Elements’ SPDX SBOM Document in 5 Minutes appeared first on Linux.com.

The rise in cyberattacks and software’s critical role in our lives has brought to light the need for increased transparency and accountability in the software supply chain. Software distributors can achieve this by providing software bills of materials (SBOMs), which provide a comprehensive list of all the components used in a software product, including open source and proprietary code, libraries, and dependencies.

In May 2021, United States Executive Order 14028 on improving the nation’s cybersecurity emphasized the importance of SBOMs in protecting the software supply chain. After comprehensive proof of concepts using the Software Package Data Exchange format (SPDX), the National Telecommunications and Information Administration (NTIA) released the “minimum elements” for an SBOM. The minimum elements require data fields that enable basic use cases:

  • Supplier Name
  • Component Name
  • Version of the Component
  • Other Unique Identifiers
  • Dependency Relationship
  • Author of SBOM Data
  • Timestamp

The NTIA recommends that the data contained in these fields should be expressed in predictable implementations and data formats to enable automation support. One of the preferred formats for expressing this data is SPDX. While version 2.3 of the SPDX specification, released in November 2022, was the first version to explicitly describe how to express the NTIA minimum elements in an SPDX document, SPDX has supported these elements since its version 2.0 release in 2015.
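To make the list concrete, the minimum elements map onto a handful of SPDX 2.x tag-value fields. The sketch below is a minimal illustration, not an official tool; the package name, version, supplier, and document namespace are hypothetical:

```python
from datetime import datetime, timezone

def minimal_spdx(name: str, version: str, supplier: str) -> str:
    """Build a minimal SPDX 2.3 tag-value document covering the NTIA
    minimum elements (package details here are hypothetical examples)."""
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    lines = [
        "SPDXVersion: SPDX-2.3",
        "DataLicense: CC0-1.0",
        "SPDXID: SPDXRef-DOCUMENT",
        f"DocumentName: {name}-sbom",
        f"DocumentNamespace: https://example.com/spdxdocs/{name}-{version}",
        "Creator: Tool: example-sbom-generator",       # Author of SBOM Data
        f"Created: {created}",                         # Timestamp
        "",
        f"PackageName: {name}",                        # Component Name
        "SPDXID: SPDXRef-Package",                     # Other Unique Identifier
        f"PackageVersion: {version}",                  # Version of the Component
        f"PackageSupplier: Organization: {supplier}",  # Supplier Name
        "PackageDownloadLocation: NOASSERTION",
        "FilesAnalyzed: false",
        "",
        # Dependency Relationship: this document describes the package
        "Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-Package",
    ]
    return "\n".join(lines)

print(minimal_spdx("hello-lib", "1.2.3", "Example Corp"))
```

In practice a generator would emit one package stanza (plus relationships) per dependency, but the field-to-element mapping stays the same.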

Read more about how to create an SPDX SBOM document that complies with the NTIA “minimum elements” at The New Stack.

Multiculturalism in technology and its limits: AsyncAPI and the long road to open source utopia
https://www.linux.com/news/multiculturalism-in-technology-and-its-limits-asyncapi-and-the-long-road-to-open-source-utopia-2/
Mon, 24 Apr 2023

The post Multiculturalism in technology and its limits: AsyncAPI and the long road to open source utopia appeared first on Linux.com.

"Open Source Utopia" by Jason Perlow, Bing Image Creator

Image “Open Source Utopia” by Jason Perlow, Bing Image Creator

“Technology is not neutral. We’re inside of what we make, and it’s inside of us. We’re living in a world of connections – and it matters which ones get made and unmade.” – Donna J. Haraway

The body is the best and the only tool humans have for life; it is the physical representation of who we are, the container in which we move and represent ourselves. It reflects our identity, the matter that represents us socially.

Humans have differentiated themselves from other animals by creating tools, using elements that increase their physical and mental capacities, extending their limits, and mediating how they see and understand the world. The body is, thus, transfixed and intermediated by technology.

In the contemporary era, technological progress has led to global interconnection. Global access to the Internet has become the main propeller of globalization, a democratizing and liberating weapon.

It is a place where the absence of corporeality resituates us all at the same level. It is a pioneering experience in which the medium can favor equality, offering a space of representation in which anonymity and the absence of gender, ethnic, and cultural constraints facilitate equal opportunities.

A temporary autonomous zone

The absence of a historical past turned the Internet into a “temporary autonomous zone,” a new space where identities could be expressed and constructed more freely. In this way, the Internet has provided oppressed collectives and communities with a means of alleviating cultural and gender biases, one in which people can express themselves free of socio-political pigeonholing.

This same idea can be extrapolated to the new workspaces within technology. The modern workshop is on the network and is interconnected with colleagues who live in any corner of the world. This situation leads us to remote teamwork, multiculturalism, and all the positive aspects of this concept, creating diverse and heterogeneous teams where nationalities, ethnicities, and backgrounds are mixed.

In this idyllic world of liberated identities and newly constructed spaces to inhabit, the shadows of the physical world, with its dense and unequal past, creep in. Open source projects have faced all of these opportunities and constraints in recent years, trying to achieve the goals expressed during the heroic times of the internet in the ’90s.

Opening doors: For whom? For all?

AsyncAPI is an open source initiative sustained and driven by its community. It is a free project, open to everyone who wants to participate, and it follows the basic idea of being created by everyone, for everyone.

Being part of the initiative is simple: join the Slack channel and contribute through GitHub. People join freely and form a team that has taken this project to a high level.

But all freedom is conditioned by the context and the system surrounding it. At this point, AsyncAPI as a project shows its limitations and feels constrained. Talking about an open, inclusive, and enthusiastic community is a start. 

Access to technology, and literacy in it, is not widespread across all geographical and social contexts. Potentially and hypothetically, the doors are open, as are the doors of libraries; that does not mean that everyone will enter them. A glass ceiling runs through the technology field, and specifically through software development. This conflict emerges from the difficulty of building a multicultural community, rich in gender and ethnic identities and in equality, given the limitations of the field.

In 2019, the number of software developers worldwide grew to 23.9 million and was expected to reach 28.7 million by 2024. Behind these promising numbers lie huge inequalities: the majority of developers come from specific areas of the world, and women represent only 10% of the total.

Towards a utopian future: Let’s try it!

The data show that, beyond the democratizing possibilities of the Internet, most of these advances are still hypothetical rather than real. We see approximately the same numbers reflected in the AsyncAPI community. The community knows what is happening and wants to reverse this situation by becoming more heterogeneous and multicultural. That is a challenge influenced by many factors.

AsyncAPI has grown in all directions, tackling this situation and creating an ecosystem that embraces variety. It comprises a community of almost 2,000 people of more than 20 nationalities from diverse cultures, ethnicities, and backgrounds.

AsyncAPI was born as an open source initiative, a liberating software model in every sense, a code made by all and for all. It is not a model closed exclusively to the technological field but a movement with a solid ethical base that crosses screens and shapes principles. That is why AsyncAPI is committed to this model. No matter how many external factors are against it, there is a clear direction. 

The decisions taken now will be vital to building a better future – a freer and more inclusive one. We do not want a unidirectional mirror in which only some can see themselves reflected. The key is to search for a diverse and multifaceted mirror.

Aspiring to form a community that is a melting pot of cultures and identities may seem somewhat utopian, but we believe it is a worthy goal to keep in mind and for which to strive. Proposals are welcome. Minds, eyes, and ears always remain open. Let us at least try it. 

Barbaño González

Introducing self-service SPDX SBOMs
https://www.linux.com/news/introducing-self-service-sboms/
Wed, 29 Mar 2023

The post Introducing self-service SPDX SBOMs appeared first on Linux.com.

Following the precedent set by Executive Order 14028, security and compliance teams increasingly request software bills of materials (SBOMs) to identify the open source components of their software projects, assess their vulnerability to emerging threats, and verify alignment with license policies. So, we asked ourselves, how do we make SBOMs easier to generate and share?

Read the rest at the GitHub blog

PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever
https://www.linux.com/news/pytorch-2-0-our-next-generation-release-that-is-faster-more-pythonic-and-dynamic-as-ever/
Thu, 23 Mar 2023

The post PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever appeared first on Linux.com.

We are excited to announce the release of PyTorch® 2.0, which we highlighted during the PyTorch Conference on 12/2/22! PyTorch 2.0 offers the same eager-mode development and user experience, while fundamentally changing and supercharging how PyTorch operates at the compiler level under the hood, with faster performance and support for Dynamic Shapes and Distributed.

This next-generation release includes a Stable version of Accelerated Transformers (formerly called Better Transformers); Beta includes torch.compile as the main API for PyTorch 2.0, the scaled_dot_product_attention function as part of torch.nn.functional, the MPS backend, functorch APIs in the torch.func module; and other Beta/Prototype improvements across various inferences, performance and training optimization features on GPUs and CPUs. For a comprehensive introduction and technical overview of torch.compile, please visit the 2.0 Get Started page.

Along with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, including those that are in-tree, and separate libraries including TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. More details can be found in this library blog.

This release is composed of over 4,541 commits and 428 contributors since 1.13.1. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.0 and the overall 2-series this year.

Summary:

  • torch.compile is the main API for PyTorch 2.0, which wraps your model and returns a compiled model. It is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.
  • As an underpinning technology of torch.compile, TorchInductor with Nvidia and AMD GPUs relies on the OpenAI Triton deep learning compiler to generate performant code and hide low-level hardware details. OpenAI Triton-generated kernels achieve performance on par with hand-written kernels and specialized CUDA libraries such as cuBLAS.
  • Accelerated Transformers introduce high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SDPA). The API is integrated with torch.compile(), and model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator.
  • The Metal Performance Shaders (MPS) backend provides GPU-accelerated PyTorch training on Mac platforms, with added support for the top 60 most used ops, bringing coverage to over 300 operators.
  • Amazon AWS optimizes PyTorch CPU inference on AWS Graviton3-based C7g instances. PyTorch 2.0 improves inference performance on Graviton compared to previous releases, including improvements for ResNet-50 and BERT.
  • New prototype features and technologies across TensorParallel, DTensor, 2D parallel, TorchDynamo, AOTAutograd, PrimTorch and TorchInductor.
| Stable | Beta | Prototype | Performance Improvements |
| --- | --- | --- | --- |
| Accelerated PT 2 Transformers | torch.compile | DTensor | CUDA support for 11.7 & 11.8 (deprecating CUDA 11.6) |
| | PyTorch MPS Backend | TensorParallel | Python 3.8 (deprecating Python 3.7) |
| | Scaled dot product attention | 2D Parallel | AWS Graviton3 |
| | functorch | torch.compile (dynamic=True) | |
| | Dispatchable Collectives | | |
| | torch.set_default & torch.device | | |
| | X86 quantization backend | | |
| | GNN inference and training performance | | |

*To see a full list of public 2.0, 1.13, and 1.12 feature submissions, click here.

STABLE FEATURES

[Stable] Accelerated PyTorch 2 Transformers

The PyTorch 2.0 release includes a new high-performance implementation of the PyTorch Transformer API. In releasing Accelerated PT2 Transformers, our goal is to make training and deployment of state-of-the-art Transformer models affordable across the industry. This release introduces high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SDPA), extending the inference “fastpath” architecture, previously known as “Better Transformer.”

Similar to the “fastpath” architecture, custom kernels are fully integrated into the PyTorch Transformer API – thus, using the native Transformer and MultiHeadAttention API will enable users to:

  • transparently see significant speed improvements;
  • support many more use cases including models using Cross-Attention, Transformer Decoders, and for training models; and
  • continue to use fastpath inference for fixed and variable sequence length Transformer Encoder and Self Attention use cases.

To take full advantage of different hardware models and Transformer use cases, multiple SDPA custom kernels are supported (see below), with custom kernel selection logic that picks the highest-performance kernel for a given model and hardware type. In addition to the existing Transformer API, model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator. Accelerated PyTorch 2 Transformers are integrated with torch.compile(). To use your model while benefiting from the additional acceleration of PT2 compilation (for inference or training), pre-process the model with model = torch.compile(model).
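For instance, the new operator can be called directly on query, key, and value tensors (a minimal sketch; the shapes are arbitrary):

```python
import torch
import torch.nn.functional as F

# Arbitrary example shapes: (batch, heads, sequence length, head dim).
q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 8, 16, 64)
v = torch.randn(2, 8, 16, 64)

# Fused scaled dot product attention; the backing kernel (FlashAttention,
# memory-efficient, or the math fallback) is chosen automatically based on
# the inputs and the hardware in use.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 8, 16, 64])
```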

We have achieved major speedups for training transformer models and in particular large language models with Accelerated PyTorch 2 Transformers using a combination of custom kernels and torch.compile().

Figure: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as the nanoGPT run shown here.

BETA FEATURES

[Beta] torch.compile

torch.compile is the main API for PyTorch 2.0, which wraps your model and returns a compiled model. It is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.
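That wrapping can be sketched in a few lines. Note that the "eager" backend is chosen here only so the sketch runs without a C++ toolchain; in normal use you would omit the backend argument and get the default TorchInductor backend:

```python
import torch

def f(x):
    return torch.sin(x) + torch.cos(x)

# torch.compile wraps a function (or nn.Module) and returns a compiled
# callable; the original is left untouched, which is why 2.0 remains
# fully backward compatible.
compiled_f = torch.compile(f, backend="eager")

x = torch.randn(8)
# The first call triggers graph capture; results match eager mode.
assert torch.allclose(f(x), compiled_f(x))
```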

Underpinning torch.compile are new technologies – TorchDynamo, AOTAutograd, PrimTorch and TorchInductor:

  • TorchDynamo captures PyTorch programs safely using Python Frame Evaluation Hooks and is a significant innovation that was a result of 5 years of our R&D into safe graph capture.
  • AOTAutograd overloads PyTorch’s autograd engine as a tracing autodiff for generating ahead-of-time backward traces.
  • PrimTorch canonicalizes ~2000+ PyTorch operators down to a closed set of ~250 primitive operators that developers can target to build a complete PyTorch backend. This substantially lowers the barrier of writing a PyTorch feature or backend.
  • TorchInductor is a deep learning compiler that generates fast code for multiple accelerators and backends. For NVIDIA and AMD GPUs, it uses OpenAI Triton as a key building block. For Intel CPUs, it generates C++ code using multithreading and vectorized instructions, offloading appropriate operations to mkldnn when possible.

With all of these new technologies, torch.compile is able to work 93% of the time across 165 open-source models, and models run 20% faster on average at float32 precision and 36% faster on average at AMP precision.

For more information, please refer to https://pytorch.org/get-started/pytorch-2.0/ and for TorchInductor CPU with Intel here.

[Beta] PyTorch MPS Backend

MPS backend provides GPU-accelerated PyTorch training on Mac platforms. This release brings improved correctness, stability, and operator coverage.

MPS backend now includes support for the Top 60 most used ops, along with the most frequently requested operations by the community, bringing coverage to over 300 operators. The major focus of the release was to enable full OpInfo-based forward and gradient mode testing to address silent correctness issues. These changes have resulted in wider adoption of MPS backend by 3rd party networks such as Stable Diffusion, YoloV5, WhisperAI, along with increased coverage for Torchbench networks and Basic tutorials. We encourage developers to update to the latest macOS release to see the best performance and stability on the MPS backend.
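Opting into the backend is a one-line device selection; the sketch below falls back to CPU when no Apple-silicon GPU is present, so it runs anywhere:

```python
import torch

# Use the MPS backend when available (Apple-silicon Macs), otherwise CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(8, 4, device=device)
y = model(x)
print(y.shape)  # torch.Size([8, 2])
```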

Links

  1. MPS Backend
  2. Developer information
  3. Accelerated PyTorch training on Mac
  4. Metal, Metal Performance Shaders & Metal Performance Shaders Graph

[Beta] Scaled dot product attention 2.0

We are thrilled to announce the release of PyTorch 2.0, which introduces a powerful scaled dot product attention function as part of torch.nn.functional. This function includes multiple implementations that can be seamlessly applied depending on the input and hardware in use.

In previous versions of PyTorch, you had to rely on third-party implementations and install separate packages to take advantage of memory-optimized algorithms like FlashAttention. With PyTorch 2.0, all these implementations are readily available by default.

These implementations include FlashAttention from HazyResearch, Memory-Efficient Attention from the xFormers project, and a native C++ implementation that is ideal for non-CUDA devices or when high-precision is required.

PyTorch 2.0 will automatically select the optimal implementation for your use case, but you can also toggle them individually for finer-grained control. Additionally, the scaled dot product attention function can be used to build common transformer architecture components.
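The per-kernel toggling can be sketched with the torch.backends.cuda.sdp_kernel context manager exposed in the 2.0 release. Here only the C++ math fallback is left enabled, which also works on CPU-only machines:

```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(2, 8, 16, 64)

# Restrict kernel selection to the math implementation; the FlashAttention
# and memory-efficient kernels are disabled inside this context.
with torch.backends.cuda.sdp_kernel(
    enable_flash=False, enable_math=True, enable_mem_efficient=False
):
    out = F.scaled_dot_product_attention(q, k, v)

print(out.shape)  # torch.Size([2, 8, 16, 64])
```

Outside the context, the automatic selection described above applies again.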

Learn more with the documentation and this tutorial.

[Beta] functorch -> torch.func

Inspired by Google JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch.

We’re excited to announce that, as the final step of upstreaming and integrating functorch into PyTorch, the functorch APIs are now available in the torch.func module. Our function transform APIs are identical to before, but we have changed how the interaction with NN modules works. Please see the docs and the migration guide for more details.

Furthermore, we have added support for torch.autograd.Function: one is now able to apply function transformations (e.g. vmap, grad, jvp) over torch.autograd.Function.
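A small sketch of the transforms now living in torch.func, composing grad and vmap over a plain Python function:

```python
import torch
from torch.func import grad, vmap

# A scalar function of a scalar input; grad() returns its derivative function.
def f(x):
    return x ** 3

df = grad(f)            # df(x) computes 3 * x**2
x = torch.tensor(2.0)
print(df(x))            # tensor(12.)

# vmap vectorizes the transformed function over a batch of inputs.
batched_df = vmap(df)
xs = torch.tensor([0.0, 1.0, 2.0])
print(batched_df(xs))   # derivatives 3*x**2 at each point
```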

[Beta] Dispatchable Collectives

Dispatchable collectives is an improvement to the existing init_process_group() API which changes backend to an optional argument. For users, the main advantage of this feature is that it will allow them to write code that can run on both GPU and CPU machines without having to change the backend specification. The dispatchability feature will also make it easier for users to support both GPU and CPU collectives, as they will no longer need to specify the backend manually (e.g. “NCCL” or “GLOO”). Existing backend specifications by users will be honored and will not require change.

Usage example:

import torch.distributed as dist
...
# old
dist.init_process_group(backend="nccl", ...)
dist.all_reduce(...)  # with CUDA tensors: works
dist.all_reduce(...)  # with CPU tensors: does not work

# new
dist.init_process_group(...)  # backend is optional
dist.all_reduce(...)  # with CUDA tensors: works
dist.all_reduce(...)  # with CPU tensors: works

Learn more here.

[Beta] torch.set_default_device and torch.device as context manager

torch.set_default_device allows users to change the default device that factory functions in PyTorch allocate on. For example, after calling torch.set_default_device('cuda'), a call to torch.empty(2) will allocate on CUDA rather than on CPU. You can also use torch.device as a context manager to change the default device on a local basis. This resolves a long-standing feature request, open since PyTorch’s initial release, for a way to do this.
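A short sketch of both forms (using "cpu" here so it runs anywhere; "cuda" works the same way on a GPU machine):

```python
import torch

# Factory functions now allocate on the chosen default device.
torch.set_default_device("cpu")  # "cuda" would place allocations on the GPU
x = torch.empty(2)
print(x.device)  # cpu

# torch.device as a context manager scopes the override locally.
with torch.device("cpu"):
    y = torch.ones(3)
print(y.device)  # cpu
```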

Learn more here.

The post PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever appeared first on Linux.com.

]]>
Slice and Save Costs with Open Packet Broker https://www.linux.com/news/slice-and-save-costs-with-open-packet-broker/ Tue, 21 Mar 2023 14:04:10 +0000 https://www.linux.com/?p=585239 Enterprise data centers continuously monitor network traffic to improve performance, provide better customer experience, and identify threats. All these appliances or tools require only a portion of the network payload to meet the monitoring requirements. Modern Network Packet brokers use “Packet truncation” technique to optimize the processing of network traffic which involves the removal of […]

The post Slice and Save Costs with Open Packet Broker appeared first on Linux.com.

]]>
Enterprise data centers continuously monitor network traffic to improve performance, provide a better customer experience, and identify threats. All of these appliances and tools require only a portion of the network payload to meet their monitoring requirements. Modern network packet brokers use a “packet truncation” technique to optimize the processing of network traffic, which involves removing the portions of network packets that are not needed for analysis.

Need for Packet Truncation

Reduce Storage: Network packets can be large (64 to 9,216 bytes), and storing everything is expensive. Packet truncation reduces the amount of data that needs to be stored by removing irrelevant or redundant information from packets.

Reduce CPU Cycles: Truncated packets require less processing to analyze, which can improve the overall speed and performance of the tools.

Simplify Analysis: Network administrators can identify network performance issues more quickly and efficiently, since truncated packets retain only the relevant portions of each packet.

Improve Security: By removing sensitive information from the payload, security can be improved by limiting the exposure of confidential data.

Open Packet Broker for Truncation

Aviz Open Packet Broker is the industry’s first packet broker solution built on the open networking NOS SONiC, supporting wire-speed packet truncation on commodity ASICs. Open Packet Broker truncation has the following capabilities:

  • Packet Truncation based on custom offsets (48 bytes to 4094 bytes).
  • VLAN tag insertion for truncated packets for different tooling purposes.
  • Load balancing across tools for optimal processing.

Packet truncation, or slicing, forwards only a user-defined number of bytes from each incoming packet; the remaining bytes are discarded. This reduces the quantity of data processed on the tool port.
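As a toy illustration of the idea (the helper name and the 128-byte default are illustrative, not part of the Aviz OPB API), slicing simply keeps a fixed prefix of each packet:

```python
def truncate(packet: bytes, offset: int = 128) -> bytes:
    """Keep the first `offset` bytes (enough for L2-L4 headers); drop the payload."""
    return packet[:offset]

full_frame = bytes(1500)       # a full-size Ethernet frame
sliced = truncate(full_frame)  # only 128 bytes reach the tool port
print(len(sliced))             # 128
```

In hardware this slicing happens at line rate in the ASIC, so the tool port sees only the header bytes it actually needs.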


Figure 1: Truncation configured on a network port of flow1
Figure 2: Flow configure through APIs

Conclusion

Packet truncation helps reduce storage requirements, improve analysis, speed up processing, and enhance network security. Open Packet Broker from Aviz (Aviz OPB) improves cost savings by giving customers a choice of open networking hardware SKUs that support line-rate packet processing.

Authors: Chid Perumal, CTO, and Rajasekaran S, Member of Technical Staff, Aviz Networks

The post Slice and Save Costs with Open Packet Broker appeared first on Linux.com.

]]>
Open Source: Separating Fact from Fiction https://www.linux.com/news/open-source-separating-fact-from-fiction/ Fri, 10 Mar 2023 20:10:40 +0000 https://www.linux.com/?p=585219  

The post Open Source: Separating Fact from Fiction appeared first on Linux.com.

]]>
Read the original blog at The Linux Foundation 

The post Open Source: Separating Fact from Fiction appeared first on Linux.com.

]]>
Blues Wireless, IRNAS, and Sternum join the Zephyr Project as Widespread Industry Adoption of the Open Source RTOS Accelerates https://www.linux.com/news/blues-wireless-irnas-and-sternum-join-the-zephyr-project-as-widespread-industry-adoption-of-the-open-source-rtos-accelerates/ Thu, 23 Feb 2023 14:21:07 +0000 https://www.linux.com/?p=585175 See Zephyr RTOS in Action at Embedded World on March 14-16 in Nuremberg, Germany SAN FRANCISCO, February 23, 2023 – Today, the Zephyr® Project announced that Blues Wireless, IRNAS, and Sternum have joined as Silver members just as the real-time operating system (RTOS) has hit widespread adoption in products. Members such as Google, Meta, Oticon […]

The post Blues Wireless, IRNAS, and Sternum join the Zephyr Project as Widespread Industry Adoption of the Open Source RTOS Accelerates appeared first on Linux.com.

]]>
See Zephyr RTOS in Action at Embedded World on March 14-16 in Nuremberg, Germany

SAN FRANCISCO, February 23, 2023 – Today, the Zephyr® Project announced that Blues Wireless, IRNAS, and Sternum have joined as Silver members just as the real-time operating system (RTOS) has hit widespread adoption in products. Members such as Google, Meta, Oticon and T-Mobile have products powered by Zephyr RTOS.

“Adoption of Zephyr has increased dramatically in the last few years,” said Kate Stewart, Vice President of Dependable Embedded Systems at the Linux Foundation. “In addition to Zephyr being used in a variety of industrial applications, we’re finding it in all sorts of emerging markets like wearables, trackers, intelligent IoT devices, animal monitoring systems, and more. We hope being product ready will help these new members and the community with development, delivery, and maintenance across a wide variety of additional devices and solutions.”

Products that are powered by Zephyr include: 

  • Google Chromebooks: The embedded controller is an ultra-low-power microcontroller that is always on. It is critical to the all-day battery life as it handles all the things a Chromebook has to do when the application processor is off or sleeping. Google recently decided to move the EC application to Zephyr so that vendors can write their drivers once and capture design wins in product areas beyond Chromebooks. Zephyr’s device model is based on the industry standards of devicetree and Kconfig. These technologies simplify the customization steps needed for each Chromebook model, lessening the engineering effort for Chromebook manufacturers. Learn more here.
  • Oticon More™ Hearing Aids: The revolutionary Oticon More is the world’s first hearing aid that allows users to hear all relevant sounds thanks to an on-board Deep Neural Network. It is powered by the Polaris chipset, integrating Zephyr RTOS for Bluetooth LE connectivity. This novel hearing instrument is an advanced medical product that will help millions of hearing-impaired people to a better quality of life. Learn more here.
  • T-Mobile’s DevEdge: The DevEdge is a self-serve developer platform that offers access to the T-Mobile network to create connected wireless solutions. The IoT Developer Kit, which runs on Zephyr RTOS, gives developers immediate access to T-Mobile’s network – no testing hardware, no lengthy build time required. Learn more here.

Even as a new member, IRNAS has been using Zephyr RTOS for the last four years as part of its strategy to work with the best technologies to build industrial solutions for global clients, particularly focusing on Zephyr RTOS running on Nordic Semiconductor’s nRF52 and nRF91 series products. Advanced applied solutions range from critical infrastructure monitoring devices, such as the RAM-1 developed for Izoelektro, all the way to livestock management and tracking products engineered for Telespor. As part of IRNAS’s responsible environmental strategy, it also formed a partnership with Smart Parks to design Open Collar animal trackers for nature conservation. These trackers are mounted on collars worn by wild animals to monitor them and keep them safe.

“Zephyr has been at our core for a number of years, and now we are happy to take the next step and support the project that enabled us to build better connected products and be part of the Zephyr community,” said Luka Mustafa, CEO and Founder of IRNAS. “Zephyr RTOS has already achieved significant adoption in industry, enabling us to design applications and value add logic to products running on multiple architectures, designing products that can continue evolving over the decades to come without massive rewrites in the process.”

Blues Wireless and Sternum also joined the project as Silver members. 

“At Blues, we are proud to support the Zephyr Project and officially join the community,” said Brandon Satrom, Vice President of Developer Experience & Engineering at Blues. “Zephyr’s open source, multi-architecture approach is perfect for our customers as they scale, and are looking for a robust RTOS to pair with the device agnostic, secure, and simple cellular connectivity that Blues provides. We look forward to introducing more of our customers to Zephyr, and leveraging our expertise to help Zephyr developers add low-power cellular to their tool belt.”

“Zephyr is already the platform of choice for some of our largest customers, allowing us a clear view of how it’s being used to power medical devices, payment devices, gateways, and industrial infrastructure,” says Natali Tshuva, CEO and Co-Founder of Sternum. “We see growing demand from device manufacturers for advanced security controls, post-market surveillance capabilities, and threat mitigation that go beyond perpetual security patching. Our built-in support for Zephyr RTOS and toolchains allows us to address these needs and offer an easy way to bring our patented technology to all Zephyr-based devices.”

Zephyr, an open source project at the Linux Foundation that builds a safe, secure, and flexible RTOS for resource-constrained devices, is easy to deploy, connect and manage. It supports more than 450 boards running embedded microcontrollers from Arm and RISC-V to Tensilica, NIOS, ARC and x86 as single and multicore systems. It has a growing set of software libraries that can be used across various applications and industry sectors such as Industrial IoT, wearables, machine learning, and more. Zephyr is built with an emphasis on broad chipset support, security, dependability, long-term support releases, and a growing open source ecosystem.

Zephyr Project Platinum members include Antmicro, Baumer, Google, Intel, Meta, Nordic Semiconductor, NXP, Oticon, Qualcomm Innovation Center, and T-Mobile. Silver members include AVSystem, BayLibre, Golioth, Infineon, Laird Connectivity, Linaro, Memfault, Parasoft, Percepio, SiFive, Silicon Labs, Synopsys, Texas Instruments, and Wind River. 

Where to see Zephyr 

Zephyr will be on-site at Embedded World on March 14-16 in Nuremberg, Germany. The booth, located in Hall 4 – Stand 170, will host live demonstrations from members such as Antmicro, AVSystem, Blues Wireless, Golioth, IRNAS, Memfault, Nordic Semiconductor, NXP, Parasoft and Sternum. Stop by to talk to project members and ambassadors about these demos and check out products running on Zephyr. Click here for a total list of demos and products featured at Embedded World.

Additionally, the Zephyr community will be at the Zephyr Developer Summit, which takes place on June 27-30 in Prague, Czech Republic, and virtually as part of the first-ever Embedded Open Source Summit (EOSS). The annual Zephyr Developer Summit, which launched in 2021, is for developers using or considering Zephyr RTOS in embedded products. This year will focus on topics of interest to users, developers contributing upstream, and maintainers who want to learn more about technical details, products that leverage the RTOS, and the ecosystem.

To submit a speaking proposal, click here by February 27. Learn more about sponsoring the event here or register to attend the event here.  

About the Zephyr Project

The Zephyr® Project is an open source, scalable real-time operating system (RTOS) supporting multiple hardware architectures. To learn more, please visit www.zephyrproject.org.

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

Media Contact:

Maemalynn Meanor

Director of Public Relations & Communications

maemalynn@linuxfoundation.org

The post Blues Wireless, IRNAS, and Sternum join the Zephyr Project as Widespread Industry Adoption of the Open Source RTOS Accelerates appeared first on Linux.com.

]]>