Ask around and you'll find that companies do everything they can to protect their systems. A major breach costs serious money and damages customer trust.
However, securing computer systems is extraordinarily difficult. If you forget the mortar on a single brick, your house probably won't fall down. In IT security, though, a single weak brick is exactly what brings the whole wall down.
A large majority of the internet runs on open source software. That software is maintained by volunteers, with no formal rules for code review beyond the ones they set for themselves. Roadmaps are driven by the availability, expertise, and interests of unpaid contributors.
On the one hand, open source is essential. We can't ask every coder to reinvent the wheel. And honestly, the people who build and maintain the mundane open source projects that hold the internet together deserve to be canonized. But the voluntary basis of many open source projects means that a blind expectation of security and interoperability is a serious risk.
While essential projects often receive financial support from companies, that support is rarely proportional to the work required to keep the project safe. Look no further than Heartbleed, the massive OpenSSL vulnerability that sat in one of the internet's most widely used libraries for two years before it was discovered.
Since the vast majority of systems rely on open source software, there is always the possibility of a hidden but devastating bug lurking in your favorite open source framework.
There is a saying in the world of digital security: defenders need to win every time, but a hacker only needs to win once. A single chink in the armor is all it takes for a database to be compromised.
Sometimes the flaw is the result of a developer taking a shortcut or being negligent. Sometimes it is a previously unknown zero-day. However careful a developer might be, it is wishful thinking to assume you've patched every security hole.
Declaring a lock "pick proof" is the quickest way to find out how optimistic your lock designers were. Computer systems are no different. No system is unhackable; it all depends on the resources an attacker is willing to spend.
As long as humans exist in the system at any stage, from design to execution, the system can be subverted.
Security is always a trade-off between convenience and protection. A perfectly protected system is one nobody can use: the more secure a system is, the harder it is to use. This is a fundamental truth of system design.
Security works by erecting roadblocks that must be overcome, and no roadblock worth the name takes zero time to get past. The higher a system's security, therefore, the less usable it is.
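To make that concrete, here is a hypothetical Python sketch of one common roadblock: exponential backoff on failed logins. The stand-in password check and the delay values are illustrative assumptions, not a real implementation; the point is that the same waiting that slows an online guessing attack also slows the legitimate user who fat-fingers their password a few times.

```python
# Hypothetical security roadblock: exponential backoff on failed logins.
# Each failure doubles the wait, which slows a guessing attack to a crawl --
# and also slows down a legitimate user who simply mistyped their password.
import time

def check_password(submitted, expected="hunter2"):
    """Stand-in credential check; a real system would verify a salted hash."""
    return submitted == expected

def login_with_backoff(attempts):
    delay = 1.0  # seconds to wait after the first failure (illustrative)
    for attempt in attempts:
        if check_password(attempt):
            return True
        time.sleep(delay)   # the roadblock: every failure costs time
        delay *= 2          # and the cost doubles each time
    return False

# Three wrong guesses already cost 1 + 2 + 4 = 7 seconds of waiting.
login_with_backoff(["hunter1", "hunter3", "password", "hunter2"])
```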
The humble password is the perfect example of this trade-off in action.
You could say that the longer the password, the harder it is to brute force, so long passwords for everyone, right? But passwords are a classic double-edged sword. Longer passwords are harder to crack, but they're also harder to remember. Frustrated users end up reusing credentials across systems and writing them down. I mean, what kind of sleazy character would look at the secret note under Debra's work keyboard?
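To put rough numbers on "harder to crack", here is a back-of-the-envelope Python sketch. The 95-character alphabet (printable ASCII) and the rate of a billion offline guesses per second are assumptions chosen for illustration, not measurements of any real cracking rig.

```python
# How the brute-force search space grows with password length.
# ALPHABET_SIZE and GUESSES_PER_SECOND are illustrative assumptions.
ALPHABET_SIZE = 95
GUESSES_PER_SECOND = 1e9
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for length in (6, 8, 10, 12, 16):
    combinations = ALPHABET_SIZE ** length
    years = combinations / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"length {length:2d}: {combinations:.2e} combinations, "
          f"~{years:.2e} years to exhaust")
```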
Attackers don't have to bother cracking passwords at all. All they need to do is find a post-it note on the assistant (to the) regional manager's screen and they'll have all the access they want. Not that Dwight would ever be so careless.
We balance security and convenience to keep our systems both usable and safe. This means that every system is, in one way or another, insecure. Attackers just need to find one small hole and squeeze through it.
The defining characteristic of a hacking attack is system-level deception. Somehow you induce part of a security system to work against its own design. Whether an attacker convinces a human security guard to let them into a secure location or subverts the security protocols on a server, it can be called a "hack".
The variety of attacks "in the wild" is extraordinary, so any summary can only offer a general overview before expanding on the details of individual attacks. At the very least, a budding hacker should learn the system they are attacking.
Once a vulnerability is discovered, it can be exploited. In practical terms, an attacker might scan a server for open ports to find out which services the machine is running and at which versions. Correlate that with known vulnerabilities and, most of the time, you'll find viable attack vectors. Often it's death by a thousand cuts: small vulnerabilities chained together to gain access. If that doesn't work, there are password attacks, pretexting, social engineering, forged credentials... the list grows every day.
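As a rough illustration of that first reconnaissance step, here is a minimal Python sketch that probes a handful of well-known ports and grabs whatever service banner comes back. The hostname "scanme.example" and the port list are placeholders, and this kind of probe should only ever be pointed at systems you are authorized to test.

```python
# Minimal reconnaissance sketch: probe a few well-known ports and grab any
# service banner, which often reveals the software name and version.
import socket

TARGET = "scanme.example"  # placeholder -- scan only hosts you're authorized to test
COMMON_PORTS = {21: "ftp", 22: "ssh", 25: "smtp", 80: "http", 443: "https"}

for port, name in COMMON_PORTS.items():
    try:
        # A successful TCP connection means the port is listening.
        with socket.create_connection((TARGET, port), timeout=2) as sock:
            sock.settimeout(2)
            try:
                # Many services announce their name and version on connect.
                banner = sock.recv(128).decode(errors="replace").strip()
            except OSError:
                banner = "(no banner)"
            print(f"{port:>5} ({name}): open   {banner}")
    except OSError:
        print(f"{port:>5} ({name}): closed or filtered")
```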
With a vulnerability exploited and unauthorized access in hand, the attacker exfiltrates data.
And that's how data breaches happen all the time.