How Bitcoin Mining Works


In traditional fiat money systems, governments simply print more money when they need to. But in bitcoin, money isn’t printed at all – it is discovered. Computers around the world ‘mine’ for coins by competing with each other.

How does mining take place?

People are sending bitcoins to each other over the bitcoin network all the time, but unless someone kept a record of all these transactions, no-one would be able to keep track of who had paid what. The bitcoin network deals with this by collecting all of the transactions made during a set period into a list, called a block. It’s the miners’ job to confirm those transactions, and write them into a general ledger.

Making a hash of it

This general ledger is a long list of blocks, known as the 'blockchain'. It can be used to explore any transaction made between any bitcoin addresses, at any point on the network. Whenever a new block of transactions is created, it is added to the blockchain, creating an increasingly lengthy list of all the transactions that ever took place on the bitcoin network. A constantly updated copy of the blockchain is given to everyone who participates, so that they know what is going on.

But a general ledger has to be trusted, and all of this is held digitally. How can we be sure that the blockchain stays intact, and is never tampered with? This is where the miners come in.

When a block of transactions is created, miners put it through a process. They take the information in the block, and apply a mathematical formula to it, turning it into something else. That something else is a far shorter, seemingly random sequence of letters and numbers known as a hash. This hash is stored along with the block, at the end of the blockchain at that point in time.

Hashes have some interesting properties. It’s easy to produce a hash from a collection of data like a bitcoin block, but it’s practically impossible to work out what the data was just by looking at the hash. And while it is very easy to produce a hash from a large amount of data, each hash is effectively unique to its input. If you change just one character in a bitcoin block, its hash will change completely.
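A quick way to see these properties is to hash two nearly identical strings. This toy sketch uses Python’s standard hashlib with made-up transaction text; real bitcoin applies SHA-256 twice to binary block headers, but the avalanche behaviour is the same:

```python
import hashlib

# Two "blocks" that differ by a single character...
block_a = "Alice pays Bob 1 BTC"
block_b = "Alice pays Bob 2 BTC"

# ...produce completely unrelated digests (the avalanche effect).
print(hashlib.sha256(block_a.encode()).hexdigest())
print(hashlib.sha256(block_b.encode()).hexdigest())
```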

Miners don’t just use the transactions in a block to generate a hash. Some other pieces of data are used too. One of these pieces of data is the hash of the last block stored in the blockchain.

Because each block’s hash is produced using the hash of the block before it, it becomes a digital version of a wax seal. It confirms that this block – and every block after it – is legitimate, because if you tampered with it, everyone would know.
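In sketch form (toy data, not the real bitcoin block format), that chaining can be modelled by hashing each block’s transactions together with the previous block’s hash:

```python
import hashlib

def block_hash(transactions: str, prev_hash: str) -> str:
    # The digest covers the transactions AND the previous block's
    # hash, which is what links the blocks into a chain.
    return hashlib.sha256((prev_hash + transactions).encode()).hexdigest()

chain = []
prev = "0" * 64  # placeholder "hash" for the first (genesis) block
for txs in ["Alice->Bob 1 BTC", "Bob->Carol 0.5 BTC"]:
    prev = block_hash(txs, prev)
    chain.append({"txs": txs, "hash": prev})
```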

If you tried to fake a transaction by changing a block that had already been stored in the blockchain, that block’s hash would change. If someone checked the block’s authenticity by running the hashing function on it, they’d find that the hash was different from the one already stored along with that block in the blockchain. The block would be instantly spotted as a fake.

Because each block’s hash is used to help produce the hash of the next block in the chain, tampering with a block would also make the subsequent block’s hash wrong too. That would continue all the way down the chain, throwing everything out of whack.
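Continuing the toy sketch above, a single pass that recomputes every hash from the start shows why: edit any earlier block and verification fails from that point onward.

```python
def verify(chain) -> bool:
    # Recompute each hash from the genesis value onward; a change to
    # any earlier block invalidates that block's stored hash and,
    # through the chaining, every hash after it.
    prev = "0" * 64
    for block in chain:
        if block_hash(block["txs"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(verify(chain))                    # True: untouched chain
chain[0]["txs"] = "Alice->Bob 100 BTC"  # fake an old transaction
print(verify(chain))                    # False: instantly spotted
```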

Competing for coins

So, that’s how miners ‘seal off’ a block. They all compete with each other to do this, using software written specifically to mine blocks. Every time someone successfully creates a hash, they get a reward of 25 bitcoins, the blockchain is updated, and everyone on the network hears about it. That’s the incentive to keep mining, and keep the transactions working.

The problem is that it’s very easy to produce a hash from a collection of data. Computers are really good at this. The bitcoin network has to make it more difficult, otherwise everyone would be hashing hundreds of transaction blocks each second, and all of the bitcoins would be mined in minutes. The bitcoin protocol deliberately makes it more difficult, by introducing something called ‘proof of work’.

The bitcoin protocol won’t just accept any old hash. It demands that a block’s hash has to look a certain way; it must have a certain number of zeroes at the start. There’s no way of telling what a hash is going to look like before you produce it, and as soon as you include a new piece of data in the mix, the hash will be totally different.

Miners aren’t supposed to meddle with the transaction data in a block, but they must change the data they’re using to create a different hash. They do this using another, random piece of data called a ‘nonce’. This is used with the transaction data to create a hash. If the hash doesn’t fit the required format, the nonce is changed, and the whole thing is hashed again. It can take many attempts to find a nonce that works, and all the miners in the network are racing to do it at the same time; the first to succeed wins the block reward. That’s how miners earn their bitcoins.
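In toy form, the nonce search looks like the sketch below. A hex-prefix check stands in for bitcoin’s real difficulty target, which is a binary comparison and vastly harder to satisfy:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    # Keep changing the nonce until the hash starts with enough zeroes.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice->Bob 1 BTC")
print(nonce, digest)  # typically tens of thousands of attempts, even for 4 zeroes
```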


Apple Makes Swift Open Source for Developers



In 2014, Apple introduced a new programming language, Swift, meant to make coding easier for iOS and OS X apps. Yesterday, Apple took a major step by making the language open source. Outside developers will now be able to contribute to Swift and bring it to new platforms such as Windows or Android.

The company has already released the language on Linux, working with its partner IBM. IBM will now be able to build Swift apps and run them on its Linux servers, and Apple has started paving the path for Swift to spread to other platforms.

In a recent statement, Apple said it wants Swift to become one of the core programming languages of the next 20 years. If that happens, Apple benefits too, as more people will be able to write apps for its devices.

Apple stated that it would run the open source project from the website Swift.org, and would distribute the source code through GitHub. The company has also seeded the project with various tools, while frameworks like AppKit and UIKit will remain exclusive to Mac and iOS app development. However, the published core libraries do cover some of the functionality those frameworks provide, such as threading, common data types and a networking stack.

Though Apple has not yet released any specific figures on Swift’s uptake among developers, it cited a few example apps, such as LinkedIn and Yahoo Weather, that use Swift. The language is claimed to be faster than its predecessor, Objective-C, and can also be used to build for tvOS and watchOS.

With Apple’s backing, Swift is making a lot of noise in the market and, given how young it is, is already a notably popular language. Tiobe, a website that ranks programming languages, places Swift at number 15, ahead of SQL and MATLAB.

What Is The Marianas Web?

 

The surface web is what we think of as “the internet”: the indexed, publicly available portions of the web that you can Google.

The first layer of the “deep web” consists of pages on “the internet” that are, for a variety of reasons, unlisted or restricted-access, rendering them inaccessible to most: members-only and password-protected content, new pages that haven’t yet been indexed, and pages that aren’t indexed because of DMCA complaints (those in particular can be accessed with a little bit of boolean ingenuity that I’ll not go into here).

The second layer consists of the sleazy, clandestine, contraband-type stuff that most people think of when they think of the “deep web”, and its traffic moves through anonymizing software (Tor, e.g.), .onion links, and p2p/p2m networks. It is difficult to access, is creepy, and is therefore notorious. It consists primarily of pirated software and movies, drugs, guns, gambling, offers of contract murder, and illicit pornography. Avoid it. As a wise man once said, “nothing good happens in that part of town after midnight”.

The third and fourth layers are where you see the bulk of the traffic, and where the truly interesting stuff lives. Most information on these layers of the “deep web” is hidden primarily because it is proprietary or classified. The bulk of deep traffic is on alternate and private networks (that is to say, corporate and government traffic). Layer three consists of classified information trafficked on alternative networks such as JWICS, SIPRNet, and NSANet. The fourth layer is internal corporate traffic hosted on private PANs, WANs and LANs, inaccessible unless you’re already inside or that network is otherwise connected to the “internet” (the indexed, “googleable” surface web that we use every day).

If you consider the way the third and fourth layers are defined above, that covers pretty much everything that could reasonably be termed a ‘web’, and so excludes individual disconnected devices.

There really can’t be anything below the Deep Web (though I would happily stand corrected if need be). There’s no seabed to the Deep Web, sure, but the existing clearly-defined layers seem mutually exclusive and collectively exhaustive:
  • Layer 1: Internet-facing, Listed, Can be accessed by various non-obvious methods
  • Layer 2: Internet-facing, Unlisted, Can be accessed by obvious and non-obvious methods (primarily through enforced anonymity)
  • Layer 3: Large(/internet)-scale network but not Internet-facing, Unlisted, Can be accessed only through security flaws or direct access (e.g. via a networked device that is also Internet-facing or through physical access to networked devices or infrastructure)
  • Layer 4: Smaller-scale networks and not Internet-facing, Unlisted, Can be accessed only through security flaws (as above)

Can you keep Linux-based ransomware from attacking your servers?

 

According to SophosLabs, Linux/Ransm-C is one example of the new Linux-based ransomware attacks: in this case, a small command-line program designed to help crooks extort money through Linux servers.

“These Linux ransomware attacks are moving away from targeting end users and gravitating toward targeting Linux servers, web servers specifically, with a piece of software that encrypts data and is similar to what we’ve seen in previous years such as CryptoWall, CryptoLocker, and their variants,” explains Bill Swearingen, director of Cyber Defense, CenturyLink.

As long as attackers can leverage the ease of coding strong encryption and the high availability of anonymous currencies and anonymous hosting, ransomware is here to stay, says Swearingen. With security organizations like SophosLabs seeing and tracking new variants of Linux ransomware, enterprises should make themselves aware of its risks and cures, since as server owners, users, or operators they are prime targets.

Typical target trip ups

With the amount of open-source software in use on Linux web servers, it is very easy for attackers to take advantage of the many unpatched vulnerabilities in CMS platforms such as WordPress, Drupal, and Joomla, plant these Linux encoders / ransomware, and hold enterprise web servers and their data in exchange for some form of booty, says Swearingen.

Though the first rounds of Linux ransomware have been poorly coded, according to Swearingen, coming rounds will be increasingly effective. As attackers write the next wave of Linux encoders, enterprises need to prepare to withstand their efforts.

One obvious answer to Linux malware is to keep those CMS products and the web servers continually patched and updated. But patching produces its own challenges. Even Linux web servers have many layers that the enterprise needs to patch, says Swearingen, including the OS layer, the application layer, and the database layer. "Traditionally companies focus on the operating system layer, running vulnerability scanners. But it’s the applications that the attackers are targeting,” says Swearingen. The enterprise needs to expend effort to uncover and patch holes at all levels. That comes with additional investments in time and money.

Immediate patching is often impractical, since patches may be flawed and create their own issues. Enterprises should thoroughly test new patches before installing them in production environments. Proper testing, though, costs time, effort, and money. Enterprises have thresholds above which they can afford such testing; below that, many cannot justify the expense.

Even patches that generally function properly may conflict with adjacent applications and software dependencies, so that these CMS and other Linux web server products develop faults or stop working altogether. Ultimately, the enterprise will have to weigh the risks and costs of patching as it settles on a patching approach.

Beware, though: vulnerabilities that remain unpatched for years are exactly what most attackers target. “Attackers are utilizing well known, well documented vulnerabilities in externally facing applications,” says Swearingen. You must patch eventually, or expect to become a statistic.

Secure development

The best place to secure web applications is at the start, in development. When developers follow coding standards that address the riskiest vulnerabilities that an application can have, they greatly mitigate the potential for successful attacks. “In any custom application, ensure that your developers are referencing the OWASP Top Ten,” says Swearingen.

The OWASP Top Ten application security risks include injection flaws, poorly implemented authentication, cross-site scripting flaws, insecure direct object references, insecure configurations, exposure of sensitive data such as PII, missing function-level access controls, cross-site request forgeries, use of components with known vulnerabilities, and unvalidated redirects. In each case, the security hole permits an attacker to insert or access data or components, leading to a broader compromise.

By checking and closing each vulnerability as they create an app, developers can deal the greatest blow to attacks before the app even sees the light of day.
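To make the first item concrete, here is a minimal Python sketch (using the standard sqlite3 module with a made-up table and payload) of an injection flaw and its usual fix; the same “treat input as data, not code” idea underlies several of the other items:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
user_input = "x' OR '1'='1"  # a classic injection payload

# Vulnerable: splicing user input into the SQL string lets the
# payload rewrite the query so that it matches every row.
# query = f"SELECT * FROM users WHERE name = '{user_input}'"

# Fixed: a parameterised query treats the input purely as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] - the payload matches nothing, as it should
```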

Vulnerability monitoring

As with the OS, there are vulnerability scanners tailored to CMS and web applications. “If you’re running a WordPress site, there is an application called WPScan that can test it,” says Swearingen. HackerTarget makes multiple web and CMS scanners available. OWASP has a WordPress scanner. There are also scanners that can check the source code.
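As a rough sketch, such a scanner can also be driven from a script. The `--url` flag is WPScan’s basic target option, but check the flags against your installed version, and only ever scan sites you are authorised to test:

```python
import subprocess

# Invoke WPScan against a site you own or are authorised to test.
# example.com is a placeholder target.
result = subprocess.run(
    ["wpscan", "--url", "https://example.com"],
    capture_output=True, text=True,
)
print(result.stdout)
```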

Of course, once you find a vulnerability, you will have to either patch it or find some other solution, such as a web application firewall (WAF), to secure around it.

“You’ll want to implement security best practices including a backup strategy that you can test and confirm works to restore the system in the event that an attacker does encrypt it and hold it for ransom,” says Swearingen. If someone else has provided the server and the enterprise is in charge of the application layer, you should back up the application and perhaps the database, depending on your circumstances, he explains.

To ensure that backups as an approach will really counter a Linux ransomware attack, keep them on a different system at a different location with different credentials, so that a compromise of the server is not automatically also a compromise of the backups. “Whether or not you are using server snapshots for backups, make sure the backup system is not mounted from the original server that is subject to compromise,” says Swearingen.
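A minimal sketch of that idea follows, with placeholder host, paths and key; the key should belong to a backup-only account on a separate machine, not to the web server’s own credentials. A pull model, where the backup host fetches from the server using its own credentials, is stronger still, since nothing on a compromised server can then reach the backups at all.

```python
import hashlib
import subprocess
import tarfile

ARCHIVE = "/tmp/site-backup.tar.gz"              # placeholder paths
WEB_ROOT = "/var/www/html"
DEST = "backup@backup-host.internal:/backups/"   # hypothetical backup host

# Archive the web root.
with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add(WEB_ROOT)

# Record a checksum so the copy can be verified on the backup host.
with open(ARCHIVE, "rb") as f:
    print("sha256:", hashlib.sha256(f.read()).hexdigest())

# Push the archive using a key dedicated to backups only.
subprocess.run(
    ["scp", "-i", "/root/.ssh/backup_only_key", ARCHIVE, DEST],
    check=True,
)
```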

Security practices

Security best practices will lead the enterprise to segment web servers from any externally exposed server and from other networks, and to use highly restrictive access controls, says Swearingen. You should always use all applicable layers of defense that are available to you.

Additional security practices include running certain apps in containers to isolate them from the rest of your systems, says Ben Johnson, chief security strategist, Bit9+Carbon Black. “An app or web server in one container can’t leave it to attack an app in a different container. Even if an attack landed in one container, it wouldn’t get back out to attack something else,” says Johnson.

There are well-known forms of controls that help, too. “Use whitelisting to approve apps that can run on your web server and block everything else,” says Johnson. Hardening systems, including closing unused ports, goes hand in hand with application whitelisting.


Google launches wi-fi network in Kampala, Uganda


Google has launched its first wi-fi network in Uganda's capital Kampala, as part of a project to broaden access to affordable high-speed internet.
The company is making the broadband wireless network available to local internet providers, who will then charge customers for access.
The web giant says the network is now live in 120 key locations in Kampala.
Official statistics show Uganda has about 8.5 million internet users, making up 23% of the population.
Google hopes that by improving internet capacity in the city, local telecom companies will then be able to offer faster, cheaper broadband access to their customers.
The company estimates that one day's unlimited data using the new network should cost 1,000 Ugandan shillings ($0.30, £0.20), although local providers will decide how much they want to charge for the service.
Critics say it would have been better to focus on Uganda's rural areas, where high-speed internet access is very limited.
The wireless network forms part of a wider project to improve web infrastructure in Africa, which has seen Google lay 800km (500 miles) of cables in Uganda to establish a fibre optic network.
There are now plans to expand the project to the Ghanaian cities of Accra, Tema and Kumasi.
In October, Facebook announced its own initiative to increase access to the internet in Africa by using satellites.