Tech CEO points out what’s causing mass layoffs

Despite massive layoffs in the tech industry over the past year, one CEO is in hiring mode.

Fred Voccola, the CEO of Miami-based software company Kaseya, discussed why the industry is struggling and how his company is avoiding pink slips on “The Big Money Show” Tuesday.

“What we’re finding in the tech sector is a lot of the technology companies overextended themselves. And the main reason for it is their customers,” Voccola told FOX Business’ Brian Brenberg.

“Most of the buyers of technology, if you think about a LinkedIn or a Microsoft or a Facebook, the majority of their customers are large enterprise companies. And those enterprise companies have spent the last 15 years digitally transforming themselves or investing massive amounts of money to make them digital-first companies. We’re kind of at the end of that phase now. So the technology companies haven’t effectively adjusted their OpEx or their spending to account for that. So they’re seeing a slowdown in spending from their customers, and they’ve realized that they’re overextended. So they’re cutting back pretty aggressively,” he said.

Mass layoffs at companies including Amazon, Meta, Salesforce, and most recently LinkedIn rocked the tech sector over the past year, leaving thousands without a place to work.

Voccola believes part of the problem lies in labor costs. According to the Employment Cost Index (ECI), U.S. labor costs rose 1.2% in the first quarter of 2023 and 4.8% year-over-year from March 2022 to March 2023.

“In the last nine months, they’ve [labor costs] still gone up. I think we’re going to see them go up for the next year or two. The labor costs are pretty high,” he said.

However, certain areas of the U.S., including South Florida where his company is headquartered, are not seeing a rapid increase in labor costs, Voccola pointed out.

“Depending geographically where people are located, the rate of increase is slower. For example, in Silicon Valley, the rate of increase is astronomical. We’re a Miami-based company, so we have a little more reasonable labor costs. But the costs of labor are still going up.”

Voccola went on to explain that he moved the company from California to the “very business-friendly” Miami, where it has expanded to do business in more than 10 countries.

“You have a very motivated workforce and a very cost-effective labor force and a great

A Transistor for Sound Points Toward Whole New Electronics

While machine learning has been around a long time, deep learning has taken on a life of its own lately. The reason for that has mostly to do with the increasing amounts of computing power that have become widely available—along with the burgeoning quantities of data that can be easily harvested and used to train neural networks.

The amount of computing power at people’s fingertips started growing in leaps and bounds at the turn of the millennium, when graphics processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade. But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google’s Tensor Processing Unit (TPU) being a prime example.

Here, I will describe a very different approach to this problem—using optical processors to carry out neural-network calculations with photons instead of electrons. To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood.

Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort. That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied. The result, the output of this neuron, then becomes an input for various other neurons.
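In code, that description amounts to only a few lines. Here is a minimal sketch of a single artificial neuron in Python (NumPy is an assumed dependency, and the input values, weights, bias, and the choice of ReLU as the activation function are all illustrative):

    import numpy as np

    def neuron_output(inputs, weights, bias):
        # Weighted sum of the inputs, plus a bias term.
        z = np.dot(weights, inputs) + bias
        # Nonlinear activation function; ReLU is one common choice.
        return max(0.0, z)

    # Illustrative values: three inputs feeding one neuron.
    x = np.array([0.5, -1.2, 3.0])
    w = np.array([0.8, 0.1, -0.4])
    print(neuron_output(x, w, bias=0.2))  # prints 0.0 here, since the weighted sum is negative

The value this function returns would then be fed, along with the outputs of other neurons, into neurons elsewhere in the network.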

Reducing the energy needs of neural networks might require computing with light

For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.
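To make that trick concrete: because every neuron in a layer reads the same vector of activations from the previous layer, all of the layer’s weighted sums collapse into a single matrix-vector product. A sketch, with made-up layer sizes and random values:

    import numpy as np

    rng = np.random.default_rng(0)

    # A layer of 4 neurons fed by 3 inputs: one 4x3 weight matrix
    # stands in for 4 separate weighted-sum loops.
    W = rng.standard_normal((4, 3))   # one row of weights per neuron
    b = rng.standard_normal(4)        # one bias per neuron
    x = rng.standard_normal(3)        # activations from the previous layer

    z = W @ x + b                     # all 4 weighted sums at once
    a = np.maximum(z, 0.0)            # elementwise activation (ReLU)
    print(a.shape)                    # (4,)

A single W @ x replaces a per-neuron loop, which is precisely what lets optimized linear-algebra routines, and the hardware beneath them, do the work in bulk.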

While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true for both training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the neural network is providing the desired results).
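To see why both phases lean on the same arithmetic, here is a toy sketch of one inference pass and one training step, assuming (purely for illustration) a single linear layer and a squared-error loss; note that the gradient needed to update the weights is itself matrix arithmetic:

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.standard_normal((4, 3)) * 0.1   # current weights
    x = rng.standard_normal(3)              # one input example
    y_target = rng.standard_normal(4)       # desired output for that input

    # Inference: the forward pass is dominated by the matrix-vector product.
    y = W @ x

    # Training: one gradient-descent step on the squared error of (W x - y_target).
    # The gradient with respect to W is 2 (W x - y_target) x^T, an outer
    # product, which is more matrix math.
    grad_W = 2.0 * np.outer(y - y_target, x)
    W -= 0.01 * grad_W                      # small step against the gradient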

What are these mysterious linear-algebra calculations? They aren’t so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers—spreadsheets if you will, minus the descriptive column headers you might find in a typical Excel file.

This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.
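To make “multiply-and-accumulate” concrete, here is one matrix-vector product written out as explicit MAC steps (plain Python for clarity; real hardware pipelines and parallelizes these):

    def matvec_macs(W, x):
        # Each output entry is built from len(x) multiply-and-accumulate steps.
        out = []
        for row in W:
            acc = 0.0
            for w_ij, x_j in zip(row, x):
                acc += w_ij * x_j   # one multiply, one accumulate
            out.append(acc)
        return out

    W = [[1.0, 2.0], [3.0, 4.0]]
    x = [5.0, 6.0]
    print(matvec_macs(W, x))  # [17.0, 39.0], four MACs for a 2x2 matrix

Counting them is straightforward: applying an m-by-n weight matrix to one input vector costs m times n multiply-and-accumulate operations, so the totals balloon as networks grow.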

Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider
