Sony Electronics Announces the New WF-C700N Truly Wireless Noise Canceling Earbuds with Comfortable, Stable Fit and Immersive Sound, also Introduces the WH-1000XM5 Headphones in New Midnight Blue Color

The new WF-C700N offer comfort and high-quality sound, while the WH-1000XM5 Midnight Blue delivers powerful sound in style

SAN DIEGO, April 3, 2023 /PRNewswire/ — Sony Electronics Inc. today announced the addition of the WF-C700N truly wireless noise canceling earbuds, which are small and lightweight for all-day comfort, perfect for those looking for their first pair of truly wireless noise canceling earbuds. In addition, Sony is announcing the WH-1000XM5 in Midnight Blue, offering all of the great noise canceling features of the critically acclaimed WH-1000XM5 in a stylish blue color.

A comfortable fit for on the go
The WF-C700N have been designed with comfort and stability in mind, for an inclusive fit. Sony designed the WF-C700N by drawing on extensive ear-shape data collated since it introduced the world’s first in-ear headphones in 1982, as well as evaluations of the sensitivity of various types of ears. The WF-C700N earbuds combine a shape that closely matches the human ear with an ergonomic surface design for a more stable fit, so users can wear them longer without needing a break.

The cylindrical charging case is small and easy to carry around in a pocket or bag so they can be taken anywhere consumers want to go. Additionally, the WF-C700N are available in black, white, lavender and sage green colors, with a geometrically patterned texture for a luxurious look and feel.

More music, less background noise
With the WF-C700N it’s just the listener and their music. Users can diminish background noise with noise canceling or use the Ambient Sound Mode to stay connected to their natural surroundings.

Consumers can personalize their settings within the Sony | Headphones Connect app or use the Focus on Voice setting to chat without removing the earbuds1. These features make it simple to step into a coffee shop and order with ease, then just as quickly sit back and enjoy entertainment, distraction-free.

The WF-C700N also feature Adaptive Sound Control, which adjusts ambient sound settings depending on where the user is and what they’re doing. It recognizes locations the user frequently visits, such as offices, the gym or coffee shops, and switches to the sound mode that suits the situation. With this, consumers can seamlessly move through their surroundings while enjoying their favorite artists and entertainment.

Hear every beat in high quality
The WF-C700N deliver high-quality sound thanks to DSEE (Digital Sound Enhancement Engine). With the help of Sony’s original 5mm driver unit, the WF-C700N pack a punch, producing powerful bass and stunningly clear vocals despite their small size, bringing out the best in whatever genre or entertainment is chosen. Users can also change their music to fit their taste with the EQ settings on the Sony | Headphones Connect app.1

Enjoy an effortless listening experience
With a long-lasting battery life of up to 15 hours2, IPX4 water resistance3 and smart features, the WF-C700N truly wireless

Read More

Memory-safe programming languages are on the rise. Here’s how developers should respond

Image: Maskot / Getty

Developers across the federal government and industry should commit to using memory-safe languages for new products and tools, and identify the most critical libraries and packages to shift to memory-safe languages, according to a study from Consumer Reports.

The US nonprofit, which is known for testing consumer products, asked what steps can be taken to help usher in “memory safe” languages, like Rust, over options such as C and C++. Consumer Reports said it wanted to address “industry-wide risks that cannot be solved through user behavior or even consumer choice” and it identified “memory unsafety” as one such issue.

The report, Future of Memory Safety, looks at a range of issues, including challenges in building memory-safe language adoption within universities, levels of distrust for memory-safe languages, introducing memory-safe languages to code bases written in other languages, and also incentives and public accountability.


Over the past two years, more and more projects have been steadily adopting Rust for codebases written in C and C++ to make code more memory safe. Among them are projects from Meta, Google’s Android Open Source Project, the C++-dominated Chromium project (sort of), and the Linux kernel.

In 2019, Microsoft revealed that 70% of the security bugs it had fixed over the preceding 12 years were memory-safety issues. The figure was high because Windows was written mostly in C and C++. Since then, the National Security Agency (NSA) has recommended developers make a strategic shift away from C++ in favor of C#, Java, Ruby, Rust, and Swift.

The shift toward memory-safe languages, most notably, but not only, to Rust, has even prompted the creator of C++, Bjarne Stroustrup, and his peers to devise a plan for the “safety of C++”. Developers like C++ for its performance, and it still dominates embedded systems. C++ remains far more widely used than Rust, but both are popular languages for systems programming.

The Consumer Reports study includes input from several prominent figures in information security, as well as representatives from the Cybersecurity and Infrastructure Security Agency (CISA), Internet Security Research Group, Google, the Office of the National Cyber Director, and more.

The report highlights that computer science professors have a “golden opportunity here to demonstrate the risks” and could, for example, increase the weight given to memory-safety mistakes when assessing grades. But it adds that teaching parts of some courses in Rust could add “inessential complexity” and that there is a perception Rust is harder to learn, while C looks like a safe bet for future employability for many students.

The report suggests the industry

Read More

Researchers simulate ‘fingerprint’ of noise on quantum computer

Credit: Graham Carlow, IBM / CC BY-ND 2.0

For humans, background noise is usually just a minor annoyance. But for quantum computers, which are extremely sensitive, it can be a death knell for computations. And because “noise” for a quantum computer increases as the computer is tasked with more complex calculations, it can quickly become a major obstacle.

But because quantum computers could be so incredibly useful, researchers have been experimenting with ways to get around the noise problem. Typically, they try to measure the noise in order to correct for it, with mixed success.

A group of scientists from the University of Chicago and Purdue University collaborated on a new approach: Instead of directly trying to measure the noise, they instead construct a unique “fingerprint” of the noise on a quantum computer as it is seen by a program run on the computer.

This approach, they say, shows promise for mitigating the noise problem, as well as suggesting ways that users could actually turn noise to their advantage.

“We wondered if there was a way to work with the noise, instead of against it,” said David Mazziotti, professor in the Department of Chemistry, James Franck Institute and the Chicago Quantum Exchange and a co-author of the study, which was published Jan. 25 in Nature Communications Physics.

‘A fresh approach’

Quantum computers are based on the rules of how particles behave at the atomic level. Down at that level, particles obey a set of very strange rules: they can be in two different states at once, or become ‘entangled’ across space. Scientists hope to harness these properties as the basis for computers.

In particular, many scientists want to use quantum computers to better understand the rules of the natural world, because molecules operate according to the laws of quantum mechanics, which should theoretically be easier to simulate using a quantum computer.

But despite major advances in quantum computing technology over the past decade, computational power has lagged behind scientists’ hopes. Many had assumed that increasing the number of computer bits (“qubits,” for quantum computers) would help ease the noise problem, but because noise limits precision, scientists still haven’t been able to perform many of the computations they would like.

“We thought it might be time for a fresh approach,” said co-author Sabre Kais, professor of physics and chemistry at Purdue University.

To date, researchers have tried to understand the impact of noise by directly measuring the noise in each qubit. But cataloging these discrete changes is difficult, and, the team believed, perhaps not always the most productive route.

“Quite often in physics, it’s much easier to understand the overall behavior of a system than to know what every part is doing,” said co-author Zixuan Hu, a postdoctoral researcher at Purdue. “For example, it

Read More

A Transistor for Sound Points Toward Whole New Electronics

While machine learning has been around a long time, deep learning has taken on a life of its own lately. The reason for that has mostly to do with the increasing amounts of computing power that have become widely available—along with the burgeoning quantities of data that can be easily harvested and used to train neural networks.

The amount of computing power at people’s fingertips started growing in leaps and bounds at the turn of the millennium, when graphical processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade. But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google’s Tensor Processing Unit (TPU) being a prime example.

Here, I will describe a very different approach to this problem—using optical processors to carry out neural-network calculations with photons instead of electrons. To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood.

Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort. That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied. The result, the output of this neuron, then becomes an input for various other neurons.
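As a rough sketch of the computation just described, here is one neuron in plain Python: a weighted sum of inputs, plus a bias, passed through a nonlinear activation. The function names, the bias term, and the choice of ReLU as the activation are illustrative assumptions, not details from the article.

```python
def relu(x):
    """A common nonlinear activation function: max(0, x)."""
    return max(0.0, x)

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, then the activation.

    The result would feed into other neurons as one of their inputs.
    """
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return relu(weighted_sum)

# Example: three inputs feeding one neuron.
out = neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.2], bias=0.1)
print(out)  # 0.5*0.4 - 1.0*0.3 + 2.0*0.2 + 0.1 ≈ 0.4
```

A whole network is just many of these, with each neuron's output wired into the input lists of neurons in the next layer.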

Reducing the energy needs of neural networks might require computing with light

For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.

While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true for both training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the neural network is providing the desired results).

What are these mysterious linear-algebra calculations? They aren’t so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers—spreadsheets if you will, minus the descriptive column headers you might find in a typical Excel file.

This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.
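To make the multiply-and-accumulate view concrete, here is a minimal sketch of a matrix-vector product written as explicit MACs, with a counter showing how many such operations one product costs. Plain Python for clarity; real hardware (and libraries like BLAS) performs these MACs massively in parallel.

```python
def matvec(matrix, vector):
    """Multiply a matrix (a list of rows) by a vector using explicit
    multiply-and-accumulate operations, and count the MACs performed."""
    result = []
    macs = 0
    for row in matrix:
        acc = 0.0
        for a, b in zip(row, vector):
            acc += a * b   # one multiply-and-accumulate operation
            macs += 1
        result.append(acc)
    return result, macs

# A 2x3 matrix times a length-3 vector costs 2 * 3 = 6 MACs.
out, macs = matvec([[1, 2, 3], [4, 5, 6]], [1, 0, -1])
print(out, macs)  # [-2.0, -2.0] 6
```

The count grows with the product of the matrix dimensions, which is why larger networks demand so many more of these operations.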

Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider

Read More