Friday, January 3, 2020

Black hat hacking

Not all hackers are inherently bad. In mainstream media the word “hacker” is usually used in relation to cybercriminals, but a hacker can actually be anyone, regardless of their intentions, who uses their knowledge of computer software and hardware to break down and bypass security measures on a computer, device or network. Hacking itself is not an illegal activity unless the hacker is compromising a system without the owner’s permission. Many companies and government agencies actually employ hackers to help them secure their systems.
Hackers are generally categorized by the type of metaphorical “hat” they don: “white hat”, “grey hat”, and “black hat”. The terms come from old Western films, where the bad guy wears a black cowboy hat and the good guy wears a white one. Two main factors determine the type of hacker you’re dealing with: their motivations, and whether or not they are breaking the law.


Black Hat Hackers

Like all hackers, black hat hackers usually have extensive knowledge of breaking into computer networks and bypassing security protocols. They are also responsible for writing malware, malicious software used to gain access to these systems.
Their primary motivation is usually personal or financial gain, but they can also be involved in cyber espionage or protest, or may simply be addicted to the thrill of cybercrime. Black hat hackers range from amateurs getting their feet wet by spreading malware to experienced hackers who aim to steal data, specifically financial information, personal information and login credentials. Black hat hackers do not only seek to steal data; they may also modify or destroy it.

Why hackers love public WiFi

If you decide to use public Wi-Fi, be aware that you could be making yourself an easy target for hackers and putting your information at risk. Public networks are often unencrypted, so anyone on the same network can potentially intercept the traffic you send and receive.

White Hat Hackers

White hat hackers choose to use their powers for good rather than evil. Also known as “ethical hackers,” white hat hackers are often paid employees or contractors working for companies as security specialists who attempt to find security holes via hacking.
White hat hackers employ the same methods of hacking as black hats, with one exception: they do it with permission from the owner of the system first, which makes the process completely legal. White hat hackers perform penetration testing, test in-place security systems and perform vulnerability assessments for companies. There are even courses, training, conferences and certifications for ethical hacking.

Grey Hat Hackers

As in life, there are grey areas that are neither black nor white. Grey hat hackers blend black hat and white hat activities. Often, grey hat hackers will look for vulnerabilities in a system without the owner’s permission or knowledge. If issues are found, they will report them to the owner, sometimes requesting a small fee to fix the problem. If the owner does not respond or comply, the hackers will sometimes post the newly found exploit online for the world to see.
These hackers are not inherently malicious; they are just looking to get something out of their discoveries for themselves. Usually, grey hat hackers will not exploit the vulnerabilities they find. However, this type of hacking is still considered illegal because the hacker did not receive permission from the owner before attempting to attack the system.
Although the word “hacker” tends to carry negative connotations, it is important to remember that not all hackers are created equal. If we did not have white hat hackers diligently seeking out threats and vulnerabilities before the black hats can find them, there would probably be far more cybercriminals exploiting vulnerabilities and collecting sensitive data than there are now.

Tuesday, October 15, 2019

Huawei 5G technology

Germany has finished drawing up industry rules for the construction of its upcoming 5G networks and has decided not to exclude Huawei from the list of vendors, against the wishes of the US. The Chinese tech giant has been hampered by a US decision earlier this year to exclude the firm from working with domestic companies over espionage concerns. The decision comes amid an escalating trade war between the two countries, and the US has warned its Western allies that data-sharing agreements could be forfeited if they do not follow suit.

Reuters reports that German government officials have confirmed the country’s security catalogue, which will not bar any single vendor, in order to create a level playing field for equipment vendors. “We are not taking a pre-emptive decision to ban any actor or any company,” Steffen Seibert, the German government spokesperson, told a news conference. German operators are all customers of Huawei and have warned that banning the Chinese vendor would add years of delays and billions of dollars in costs to launching 5G networks.

The decision over whether the UK will allow the Chinese firm to participate in building its 5G network is still pending, with the Government saying it will deliver its final verdict in the autumn. The United States has piled pressure on its allies to shut out Huawei, one of the world’s leading telecoms equipment vendors with a global market share of 28 per cent. Last week, the US pressure was extended to several Chinese AI start-ups, including the video surveillance firm Hikvision, which the US has also blocked from working with US firms. Huawei has denied Washington’s allegations, although US officials have argued that under China’s national intelligence law all citizens and companies are required to collaborate in espionage efforts.

With billions of devices, sensors and cameras expected to be hooked up, 5G networks will be far more ubiquitous than their predecessors. At the same time, the fact that 5G networks rely more on software that can be easily updated makes it harder to keep track of cyber threats. The German rules come after the European Union warned last week of the risk of increased cyber attacks on 5G networks by state-backed actors. A report compiled by member states stopped short, however, of singling out China as a threat.

Tuesday, August 6, 2019

Upcoming 7G in the world...

So far 7G is just a figment of our imagination, yet the very fact that we talk about it shows we have come to take for granted that evolution will not stop and that something new will become reality.

Two days ago I published a post predicting that 6G will become reality around 2035, meaning that people will really start using it then. Of course I expect lab demos and trials well before that, as happened with the previous Gs and is happening now with 5G.
Yet, among the several comments I received, one stated with certainty that 7G (yes, 7, not 6) will be available in 2032. Unfortunately the comment did not articulate on what basis that prediction was made. Anyhow, it made me think: could we reasonably expect 7G to arrive some 13 years from now?
If I look back I can see an amazing linearity, time-wise, in the evolution of the Gs: there is basically a 10-year separation from one G to the next, whether you take as your reference frame the first trials, the first deployment, the mass-market uptake or even the demise of each generation. Based on this, since we will have real mass-market adoption of 5G in the first part of the next decade (I placed 2025 as the point where most everybody would be able to own a 5G device and get access to a 5G network, and I consider that an ambitious target), 6G should come ten years later, in the 2035 time frame, and 7G ten years further down the lane, i.e. 2045.
On the other hand, some people make forecasts based on the law of accelerated returns, like Ray Kurzweil, stating that evolution is accelerating and that what used to take 10 years will take 7 years, then just 5, and so on. I guess that under this accelerated-returns hypothesis you get a shift from 5G to 7G in 12-13 years, and that might be the reason behind the comment (I hope whoever made it will come forward with some substantiation; it is always nice to hear different viewpoints).
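To make the two hypotheses concrete, here is a back-of-the-envelope sketch. The anchor years and the 0.7 shrink factor are my own illustrative assumptions chosen to match the figures above (2025 mass-market 5G for the linear cadence; a roughly 2020 5G roll-out with gaps of 10, then 7, then ~5 years for the accelerated one); it is an illustration, not a forecast.

```python
# Back-of-the-envelope comparison of the two timelines discussed above.
# All anchor years and the shrink factor are illustrative assumptions.

def generation_years(start_year, first_gap, shrink, generations=("6G", "7G")):
    """Estimate arrival years for the next generations.

    shrink = 1.0 keeps the historical ~10-year cadence;
    shrink = 0.7 mimics an accelerated-returns cadence (10 -> 7 -> ~5 years).
    """
    year, gap, result = start_year, first_gap, {}
    for g in generations:
        year += gap
        result[g] = round(year)
        gap *= shrink
    return result

if __name__ == "__main__":
    # Linear cadence anchored at 2025 mass-market 5G: 6G ~2035, 7G ~2045.
    print("Linear:     ", generation_years(2025, first_gap=10, shrink=1.0))
    # Accelerated cadence anchored at a ~2020 5G roll-out with shrinking gaps:
    # 6G ~2027, 7G ~2032 -- close to the commenter's date.
    print("Accelerated:", generation_years(2020, first_gap=7, shrink=0.7))
```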
Personally, I do not think that the law of accelerated returns works for infrastructures. This is an area where evolution is steered by economics, with technology being an enabler. Besides, the economic drive provided by the demand side tends to become weaker as the overall infrastructure starts to exceed customers’ and users’ needs. More on this in a few moments.
There is another reason why I do not think we will see an acceleration towards new Gs. If you think about the basics of wireless, you see that the real technology driver that has enabled the progress from one G to the next is the evolution (increase) of processing capability, flanked by the increase in battery density (capacity). The latter, however, cannot go on forever, because as you increase power use you have to manage the power dissipation, and you would not like to have a red-hot brick in your hand (although we still have plenty of room to decrease the power needs of electronic components: the Landauer limit is some 100 years away).
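For reference, the Landauer limit mentioned above, the theoretical minimum energy needed to erase one bit of information, can be computed directly; a minimal sketch, assuming room temperature (300 K). Today’s electronics dissipate several orders of magnitude more energy per bit operation than this, which is the headroom referred to above.

```python
# Quick calculation of the Landauer limit: E = k * T * ln(2) joules per bit.
# Room temperature (300 K) is an assumption here.

from math import log

K_BOLTZMANN = 1.380649e-23  # J/K (exact, 2019 SI definition)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum energy in joules needed to erase one bit at the given temperature."""
    return K_BOLTZMANN * temperature_kelvin * log(2)

if __name__ == "__main__":
    print(f"Landauer limit at 300 K: {landauer_limit(300.0):.2e} J per bit")  # ~2.87e-21 J
```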
The increase in processing capacity made it possible to use higher and higher frequencies, and with 6G we might jump into the THz space, but the growth in processing capacity is now levelling out when we look at a single chip. In mobile devices you do not want many chips: they would not fit in the sleek cases we have come to love, and they would increase power consumption. We will keep seeing processing capacity increase for a few more decades, but at a lower pace (the area of GPUs was an exception to this rule, but that was mostly the result of parallel processing).
So on the technology side there will be evolution, and it might just be that the law of accelerated returns compensates for the slowdown in processing technology, maintaining the pace we have seen so far. I doubt it could accelerate that pace.
Back now to economic considerations. As we move up in frequency (something enabled by the increased processing capacity) we are confronted with propagation issues, and we are forced either to increase the transmission power (something that regulators do not allow and that would decrease battery life) or to make cells smaller. The latter is what needs to happen. The problem with smaller cells is that you need to increase the investment at the edges. The number of antennas required by 5G, assuming you want to deliver the full 5G capacity potential and cover the same area, is roughly ten times that of 4G: that means a huge infrastructure investment (in antennas, optical fibre drops and site rentals).
Move into the THz range and you have to scale up the density of the infrastructure even further, with skyrocketing cost.
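A minimal sketch of that geometry, with purely illustrative cell radii rather than measured values: covering a fixed area takes on the order of 1/r² cells, so shrinking the cell radius multiplies the number of sites, and every extra site brings its own antenna, fibre drop and rental cost.

```python
# Rough illustration of why smaller cells blow up infrastructure cost.
# The city size and cell radii below are assumptions for the sake of example.

from math import pi, ceil

def sites_needed(area_km2: float, cell_radius_km: float) -> int:
    """Approximate number of cell sites needed to cover `area_km2` with circular cells."""
    return ceil(area_km2 / (pi * cell_radius_km ** 2))

if __name__ == "__main__":
    city = 100.0  # km^2, an illustrative mid-sized city
    for label, radius in [("4G macro cell", 1.0), ("5G mmWave cell", 0.3), ("THz cell", 0.1)]:
        print(f"{label:>14} (r = {radius} km): ~{sites_needed(city, radius)} sites")
```

With these assumed radii the 5G case needs roughly ten times as many sites as the 4G case, in line with the figure quoted above, and the THz case needs a hundred times as many.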
Actually, I do not think that a pervasive 6G infrastructure can be economically sustainable based on today’s business models and approach to infrastructure ownership.
What is most likely to happen (it is already part, technically speaking, of the 5G architecture) is that the wireless edge will be created by a myriad of players, each one investing on their own, with no need to recoup the investment in a direct way (i.e. by getting money back). What I can imagine is that I will buy a new device, a car, a wearable smartphone… whatever, and that device will become an edge network node. The big infrastructures, the pipes, will grow at their own pace, sufficient to manage any increase in traffic, and the costly edges will grow asynchronously through disseminated and dispersed investment. In this sense there might be some devices that start using higher frequencies, in the tens of THz, 20 years from now, but I would not consider them a real shift to a new G, to 7G, just as I did not consider some early wireless point-to-point experiments to be 5G.
An additional point to take into consideration is the time it takes to define and agree on a new infrastructure standard. It is likely that in the future we will need to change some aspects of the standardisation process, but that, too, will take time. Industries look to standards to enable new business in an effective way, but at the same time they move slowly in order to protect existing investment.

Friday, July 26, 2019

Artificial intelligence

Robotic automation: Upskilling transaction workers

Throughout history technology has improved and eliminated processes – think of examples such as bookselling, banking and online shopping. Physical activity and paper have been eliminated, enabling transactions to be completed more simply, accurately and quickly – and new opportunities have been created both for firms and for their employees.

Robotics in the office is a rapidly expanding branch of technology-driven productivity enhancement. It exists along a continuum of automation that extends from application software like the spreadsheet or machines like the calculator at one end, to artificial intelligence at the other. This automation is designed to replace human activity and to make processes better, faster, cheaper.

The simplest robotic automation involves data entry: the robot takes input, interprets it, validates it and processes it. At the next level there is decision-making automation within a defined framework: the robotics solution has logic built in to make decisions and carry out tasks according to predefined rules. The next generation of robots will address the “knowledge worker”, where intelligence rather than rules is required to complete a task.
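As a concrete illustration of the middle level described above (decision-making within a defined framework), here is a minimal sketch of rule-based automation. The invoice fields, thresholds and actions are hypothetical examples of predefined rules, not taken from any DXC Technology tool.

```python
# A minimal sketch of rule-based decision automation within a defined framework.
# Fields, thresholds and actions are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    amount: float
    has_purchase_order: bool

def decide(invoice: Invoice) -> str:
    """Apply predefined rules and return the action the 'robot' should take."""
    if not invoice.has_purchase_order:
        return "route-to-human"          # outside the defined framework
    if invoice.amount <= 1_000:
        return "auto-approve"            # low-value, low-risk
    if invoice.amount <= 10_000:
        return "auto-approve-with-audit" # approve but flag for sampling
    return "route-to-human"              # high value needs human judgement

if __name__ == "__main__":
    print(decide(Invoice("Acme Ltd", 420.0, True)))     # auto-approve
    print(decide(Invoice("Acme Ltd", 50_000.0, True)))  # route-to-human
```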

DXC Technology uses robotics in our work for clients to solve problems from the everyday to the one-off. We have more than 200 examples of automation using a range of macros, keystroke emulation and scripting tools, specialist fraud detection and proprietary tools.

Robotic automation enables us to spend more time on value-adding activity for our clients, rather than on data entry and manipulation. The benefits associated with the ability to process large volumes of data accurately and quickly include reduced manual effort and associated costs, reduced fraud and increased controls.

Read the white paper, "Robotic automation – upskilling transaction workers" to learn more about:

  • The role robotic automation can serve in the workplace
  • How robotic automation can boost worker productivity
  • An example of a robotic solution DXC Technology used to solve a client’s business challenge





Tuesday, January 22, 2019

Hacking

Definition - What does Hacking mean?
Hacking generally refers to unauthorized intrusion into a computer or a network. The person engaged in hacking activities is known as a hacker. This hacker may alter system or security features to accomplish a goal that differs from the original purpose of the system.
Hacking can also refer to non-malicious activities, usually involving unusual or improvised alterations to equipment or process.

Techopedia explains Hacking

Hackers employ a variety of techniques for hacking, including:
  • Vulnerability scanner: checks computers on networks for known weaknesses (a minimal sketch follows this list)
  • Password cracking: the process of recovering passwords from data stored or transmitted by computer systems
  • Packet sniffer: applications that capture data packets in order to view data and passwords in transit over networks
  • Spoofing attack: involves websites that falsify data by mimicking legitimate sites, so that they are treated as trusted sites by users or other programs
  • Root kit: represents a set of programs which work to subvert control of an operating system from legitimate operators
  • Trojan horse: serves as a back door in a computer system to allow an intruder to gain access to the system later
  • Viruses: self-replicating programs that spread by inserting copies of themselves into other executable code files or documents
  • Key loggers: tools designed to record every keystroke on the affected machine for later retrieval
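As referenced in the vulnerability-scanner item above, the simplest building block of such a tool is a TCP port scan: try to connect to each port and note which ones answer. A minimal sketch follows; the host and port range are placeholders, and real scanners such as Nmap or OpenVAS go much further (service fingerprinting, checks against databases of known weaknesses). Only scan machines you own or are authorised to test.

```python
# Minimal TCP port scan: the simplest form of vulnerability scanning.
# Host and port range are placeholders; scan only systems you are authorised to test.

import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the list of ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Check a range of well-known ports on the local machine only.
    print(scan_ports("127.0.0.1", range(20, 1025)))
```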
Certain corporations employ hackers as part of their support staff. These legitimate hackers use their skills to find flaws in the company security system, thus preventing identity theft and other computer-related crimes.

Sunday, January 13, 2019

Artificial intelligence

Since the invention of computers or machines, their capability to perform various tasks has grown exponentially. Humans have developed the power of computer systems in terms of their diverse working domains, their increasing speed and their reducing size with respect to time.
A branch of Computer Science named Artificial Intelligence pursues creating computers or machines that are as intelligent as human beings.

What is Artificial Intelligence?

According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of making intelligent machines, especially intelligent computer programs”.
Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to the way intelligent humans think.
AI is accomplished by studying how the human brain thinks, and how humans learn, decide and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.

Philosophy of AI

While exploiting the power of computer systems, human curiosity led us to wonder, “Can a machine think and behave as humans do?”
Thus, the development of AI started with the intention of creating in machines the kind of intelligence that we find and regard highly in humans.

Goals of AI

  • To Create Expert Systems − systems which exhibit intelligent behavior, learn, demonstrate, explain, and advise their users.
  • To Implement Human Intelligence in Machines − creating systems that understand, think, learn, and behave like humans.

What Contributes to AI?

Artificial intelligence is a science and technology based on disciplines such as Computer Science, Biology, Psychology, Linguistics, Mathematics, and Engineering. A major thrust of AI is in the development of computer functions associated with human intelligence, such as reasoning, learning, and problem solving.
One or more of these disciplines can contribute to building an intelligent system.

Programming Without and With AI

Programming without AI and programming with AI differ in the following ways −

Programming Without AI | Programming With AI
A computer program without AI can answer only the specific questions it is meant to solve. | A computer program with AI can answer the generic questions it is meant to solve.
Modification in the program leads to a change in its structure. | AI programs can absorb new modifications by putting highly independent pieces of information together; hence you can modify even a minute piece of information of the program without affecting its structure.
Modification is not quick and easy; it may affect the program adversely. | Program modification is quick and easy.

What is an AI Technique?

In the real world, knowledge has some unwelcome properties −
  • Its volume is huge, next to unimaginable.
  • It is not well-organized or well-formatted.
  • It keeps changing constantly.
An AI technique is a way to organize and use knowledge efficiently, in such a way that −
  • It should be perceivable by the people who provide it.
  • It should be easily modifiable to correct errors.
  • It should be useful in many situations even though it may be incomplete or inaccurate.
AI techniques increase the speed of execution of the complex programs they are applied to.

Applications of AI

AI has been dominant in various fields such as −
  • Gaming − AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc., where a machine can evaluate a large number of possible positions based on heuristic knowledge (a minimal search sketch follows this list).
  • Natural Language Processing − It is possible to interact with a computer that understands the natural language spoken by humans.
  • Expert Systems − There are applications which integrate machine, software, and special information to impart reasoning and advising. They provide explanations and advice to their users.
  • Vision Systems − These systems understand, interpret, and comprehend visual input on the computer. For example,
    • A spy plane takes photographs, which are used to work out spatial information or a map of the area.
    • Doctors use clinical expert systems to diagnose patients.
    • Police use computer software that can recognize the face of a criminal from the stored portrait made by a forensic artist.
  • Speech Recognition − Some intelligent systems are capable of hearing and comprehending language in terms of sentences and their meanings while a human talks to them. They can handle different accents, slang words, noise in the background, changes in a human’s voice due to a cold, etc.
  • Handwriting Recognition − Handwriting recognition software reads text written on paper with a pen or on a screen with a stylus. It can recognize the shapes of the letters and convert them into editable text.
  • Intelligent Robots − Robots are able to perform tasks given by a human. They have sensors to detect physical data from the real world such as light, heat, temperature, movement, sound, bumps, and pressure. They have efficient processors, multiple sensors and huge memory to exhibit intelligence. In addition, they are capable of learning from their mistakes and can adapt to a new environment.
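As referenced in the Gaming item above, here is a minimal sketch of game-tree search, the core idea behind game-playing AI. Tic-tac-toe is small enough to search exhaustively with plain minimax; real chess or poker programs add heuristic evaluation functions and pruning because their game trees are far too large. The board encoding and function names here are illustrative.

```python
# Exhaustive minimax search over tic-tac-toe positions.
# 'X' maximises the score, 'O' minimises it; a win for X scores 1, for O -1, a draw 0.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if one of them has completed a line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (best achievable score, best move index) for `player` on `board`."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if best is None or (player == "X" and score > best[0]) or (player == "O" and score < best[0]):
            best = (score, m)
    return best

if __name__ == "__main__":
    empty = [" "] * 9
    print(minimax(empty, "X"))  # (0, 0): with perfect play the game is a draw
```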

History of AI

Here is the history of AI during the 20th century −

1923 − Karel Čapek’s play “Rossum’s Universal Robots” (R.U.R.) opens in London; first use of the word “robot” in English.
1943 − Foundations for neural networks laid.
1945 − Isaac Asimov, a Columbia University alumnus, coined the term Robotics.
1950 − Alan Turing introduced the Turing Test for the evaluation of intelligence and published “Computing Machinery and Intelligence”. Claude Shannon published a detailed analysis of chess playing as search.
1956 − John McCarthy coined the term Artificial Intelligence. Demonstration of the first running AI program at Carnegie Mellon University.
1958 − John McCarthy invented the LISP programming language for AI.
1964 − Danny Bobrow’s dissertation at MIT showed that computers can understand natural language well enough to solve algebra word problems correctly.
1965 − Joseph Weizenbaum at MIT built ELIZA, an interactive program that carries on a dialogue in English.
1969 − Scientists at Stanford Research Institute developed Shakey, a robot equipped with locomotion, perception, and problem solving.
1973 − The Assembly Robotics group at Edinburgh University built Freddy, the Famous Scottish Robot, capable of using vision to locate and assemble models.
1979 − The first computer-controlled autonomous vehicle, the Stanford Cart, was built.
1985 − Harold Cohen created and demonstrated the drawing program Aaron.
1990 − Major advances in all areas of AI −
  • Significant demonstrations in machine learning
  • Case-based reasoning
  • Multi-agent planning
  • Scheduling
  • Data mining, Web Crawler
  • Natural language understanding and translation
  • Vision, Virtual Reality
  • Games
1997 − The Deep Blue chess program beat the then world chess champion, Garry Kasparov.
2000 − Interactive robot pets became commercially available. MIT displayed Kismet, a robot with a face that expresses emotions. The robot Nomad explored remote regions of Antarctica and located meteorites.

Saturday, January 12, 2019

Honor View 20 India Pre-Bookings Open With Free Honor Sport Earphones on Offer

Honor View 20 India pre-bookings are now open via the Honor India official website. Earlier on Friday, an Amazon India listing suggested that the pre-bookings will start from January 15, but the official Honor India website, at least, seems to have jumped the gun. We already know the Honor View 20 will be officially launched in India on January 29. For those unaware, the Honor View 20 is the global variant of the Honor V20 launched in China last month. Its key highlights are the display hole for the selfie camera, the dual camera setup at the back, and a 48-megapixel sensor.
Honor View 20 pre-bookings are now open on the company’s official store, while the Amazon listing still says that pre-bookings in India start from January 15 and go on till January 29. The company is offering free Honor Sport BT earphones to all customers who pre-book the device on either platform.
In order to avail this offer on Amazon India, head to the Honor View 20 pre-booking page after January 15, choose the denomination of Rs. 1,000, and purchase the gift card. The gift card will be emailed to the user, and on January 30, when the sale begins, the user can purchase the Honor View 20 from the same account as the gift card purchase. After the purchase, Amazon will email the coupon code for the free Honor Sport BT earphones by February 15. Users can then grab the earphones for free from Amazon by entering the coupon code at checkout.
As for the HiHonor India store, the Huawei sub-brand Honor is also hosting Honor View 20 India pre-bookings on its own site, and as mentioned earlier it is already accepting pre-orders. The same Honor Bluetooth earphones are being offered, while interested customers will have to pay the same Rs. 1,000 to pre-book via the Honor View 20 pre-book coupon. The amount is redeemable against the purchase post launch.
To recall, the Honor View 20 was originally unveiled in China in December as the Honor V20. It is set for its official global launch in Paris on January 22.

Honor View 20 India price (expected)

The Honor View 20 price in India looks set to be around Rs. 40,000. That represents a bit of a premium compared with the price of the equivalent Honor V20 in Honor’s home country. The Honor V20 price starts at CNY 2,999 (roughly Rs. 30,400) for the 6GB RAM + 128GB storage option. The price goes up to CNY 3,499 (roughly Rs. 35,500) for the 8GB RAM + 128GB storage option and CNY 3,999 (roughly Rs. 40,600) for the Moschino Edition.

Honor View 20 specifications

Considering that the Honor View 20 is the global variant of the Honor V20, it is expected to pack specifications identical to those of the Chinese model. In terms of software, the View 20 runs Magic UI 2.0 based on Android 9 Pie. Magic UI is very similar to its predecessor, EMUI, apart from a new colour scheme, real-time in-call voice translations, and more.
The phone will feature a 6.4-inch full-HD+ (1080x2310 pixels) display with a 19.5:9 aspect ratio and 398ppi pixel density. It will be powered by the 7nm octa-core HiSilicon Kirin 980 SoC and include up to 8GB of RAM. The Honor smartphone’s inbuilt storage will be 128GB or 256GB, depending on the model. There is, however, no memory expansion slot in the phone.
The Honor View 20 features a dual-camera setup on the back with a 48-megapixel Sony IMX586 sensor with an f/1.8 aperture and a secondary 3D Time of Flight (ToF) sensor. There is also a 25-megapixel front camera with an f/2.0 aperture and fixed focus.