Today’s Approach to Testing Has No Future – Part I
New technologies, trends and software are becoming ever more complex, which of course makes the work of a tester far more demanding. Add to this the now ubiquitous agile approach, in which the tester creates the product together with the rest of the team and is therefore expected to have sufficiently broad programming skills (or at least, we would like this to be the case), and it gets really hard.
Let’s not kid ourselves – the role of software testers is still underestimated. Despite efforts to build awareness of the essence of testing, there are still companies that would prefer to avoid the costs involved. Some people may think this is a “prehistoric” approach, but we can still come across the following opinions:
- tests do not bring measurable benefits,
- tests make programmers lazy,
- what’s the point of testing, if the developer is supposed to program everything correctly in the first place?
This is obviously a short-sighted approach; in reality, these arguments boil down to one thing: money, and the desire to cut costs. Unfortunately, the testing process itself is often designed in such a way that it does not actually provide enough added value to the project. And this is not necessarily the fault of the testers themselves.
Testing – how it looks in practice
Let’s take a look at a typical project. We have certain testing needs over the course of the project, yet they are not fixed; they are often highly varied and change over time. Of course, at the beginning of the project we try to anticipate and manage these eventualities appropriately, but we must always assume that a debt will arise sooner or later, resulting from a simple mismatch between the capabilities available and the needs that emerge.
Let’s assume that we assign two people to the abovementioned project. What does this mean? That we have two testers with a given amount of time, specific knowledge and experience. Even if they are very experienced testers, able to perform various types of tests – manual testing, test automation, performance tests, security tests, API testing, static testing – we can be sure that they are not specialists in every area. So they are able to provide us with a certain defined quality. In other words, it may turn out that we have very good manual testers who are just learning how to carry out performance tests, so the quality of those tests will be somewhat uncertain.
This means that, having finite resources, we will never be able to provide the highest quality tests in the context of all needs which emerge – even if our testers give their best efforts and work or study after hours. And what about when the work of testers is no longer needed? After all, we still pay for the chairs occupied by the testers – which of course means that resources are squandered.
Figure 1. Needs with regards to the delivered tests. The blue dotted line indicates the work done, the shaded blue area the values that cover the needs of the project, and the unshaded area represents the areas not covered by testing.
We must also consider the issue of tools. The fact that we have specific tools at our disposal does not mean that they are the best. Typically, the choice of tools resembles the way we choose the bakery in which we buy our bread rolls every day. The best rolls are not actually the best ones in the whole city, but those that were good enough to stop us looking around for another bakery. Bread rolls are not always our priority, just as testing is not a priority for many software companies. Because they do not specialize in testing, they rarely have knowledge of the latest, optimal solutions, and it is difficult to require them to have a wide range of such solutions at their command.
Priority and specialization are the key words in this case, because software development companies will not usually treat testing as the core of their activity. Rather, programming is. So maybe such a company should look for a suitable technology partner, for whom testing is the priority?
When working in a system where our testers have to meet very different requirements, we will always have problems with appropriate test coverage. Such a system not only provides inefficient tests, but also produces employees who are “masters of nothing”, testers who can do “a little of this, a little of that”. The problem is that even with the best of intentions and predispositions, these testers face the “average ability” trap.
So let’s stop kidding ourselves. Good quality automated tests will not be produced by a tester who deals with them only sporadically, because their day-to-day work is manual testing, with the occasional performance test. Automated testing requires a specialized tester who will devote time not only to preparing test scripts, but also to maintaining them, refactoring, etc. On the other hand, keeping a dedicated tester on the team to perform tests every four months is not financially viable.
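The maintenance point above can be made concrete with a minimal sketch of an automated check. All names here are hypothetical; the idea is simply that validation logic is factored into a helper, so that when the system under test changes, only one place needs refactoring – the kind of ongoing upkeep a dedicated automation specialist does:

```python
# A minimal, hypothetical sketch of an automated test. In a real suite
# fetch_user() would call the system under test (e.g. an HTTP API).

def fetch_user(user_id):
    # Stand-in for a real API call; returns a canned payload here.
    return {"id": user_id, "name": "Alice", "active": True}

def assert_valid_user(payload):
    # Centralized validation: when the payload schema changes,
    # only this helper needs to be updated, not every test.
    assert set(payload) >= {"id", "name", "active"}
    assert isinstance(payload["id"], int)

def test_user_endpoint_returns_valid_payload():
    assert_valid_user(fetch_user(42))
```

A tester who only touches such a suite every few months rarely keeps helpers like this tidy, which is exactly how automated tests rot.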
Security and performance testing are also good examples. We perform these tests only every so often, so having a full-time specialist tester is hard to justify, at least from an economic point of view. So what if we want our tests to be performed by an experienced specialist who will deliver the highest quality? The solution to this problem seems to be services: internal or external.
To be continued…
Data Analysis Shaping Business – Interview with Samsung’s Andrzej Czechowski
What are the main goals which you set in terms of business analytics?
Andrzej Czechowski: At this point, all of the companies we work with, regardless of the industry, are gradually becoming analytical and IT companies. As Jeff Immelt, the CEO of General Electric, once said, we are waking up to a new reality and can no longer run a business without business analytics. So we spend more and more time analyzing data and trends, because it makes it much easier to make decisions. And because of that, these decisions become more accurate, and we can avoid a lot of mistakes, thanks to an analysis of the past, and the knowledge of various phenomena that can be observed this way. If you want to run a business, you have to know the numbers. Samsung’s strategy here might not be unique, but we most certainly would like to be a mature company and aware of these numbers, the ways in which we want to develop, competition, and internal and external conditions.
Companies are increasingly becoming aware of the potential which data has, but most still associate it with Excel spreadsheets. How do modern business analytics work for such a traditional approach to data?
Andrzej Czechowski: There are some trends that I myself try to reinforce and implement. First, we can see that visualization is becoming the language of analytics. Good visualization, meaning that it is consistent and facilitates the so-called drill down, that is, delving deep into an analytical area, is a very useful tool for contemporary analytics. On the other hand, there is also self-service BI, which allows you to discern a lot from the data yourself, without the need to send endless inquiries to IT teams, and without the need to have a professional-level knowledge of databases. At the moment, analytics is within reach of almost anyone, which is as it should be. And the next thing which we can observe is the change in technology itself. Data can already be read in real time, servers are ever more powerful, and cloud solutions are available, so technologically advanced analytical solutions are much more widely available than ever before. All this shows that business analysis itself has changed greatly.
In terms of visualization, you use Qlik products – I have QlikView and Qlik Sense in mind here. What do you think is the greatest value of these tools?
Andrzej Czechowski: We use many tools, including self-service data integration. We integrate data earlier so that it is easily accessible in the visualization layer. Everything is managed in such a way that the data is always available and well-visualized. With visualization tools, the data can be read well and I have to admit that there is a lot of pleasure to be derived from using all these possibilities. This makes it very fast and easy to independently set a variety of different dimensions. They can of course also be set by default, e.g. through bookmarks, meaning the constant reports which we use, but the greatest value lies in the fact that you can select a given area and view it from various perspectives and explore it deeply, to understand this data better.
How do employees utilize these tools? Are the primary end users of these visualizations top management, or can every employee use these tools at their operational level?
Andrzej Czechowski: Here, the aspect of customized analysis is very important at every level of access. Data should be available, but not everything for everyone. A Product Manager has different needs than an Account Manager, and the needs of a division head or president are different still. The level of knowledge, the level of information that a person would like to have, varies in scope and in detail. As a result, we also have a variety of tools to meet these needs.
Is there a need for employees to ‘drill down’? Are these tools sufficient? Do they make suggestions for how to change things?
Andrzej Czechowski: Habits are an important aspect of it. And of course you have to change these habits if it can bring benefits for your organization. On the other hand, from experience, I know that you should not change visualization tools too often. The user is used to them, knows them well, and it is precisely the use of proven, well-known tools that produces results and improves efficiency. Of course, new visualizations are still emerging, and there are plenty of additional features we add, such as campaign analysis and marketing promotions, which also require good visualization, making it much easier to understand and plan. Yet there are some elements that we created some time ago, let’s say a year or two ago, which work all the time, look good and should not be changed. As I said, business is growing, new areas are emerging, and it’s there where we add new functionalities.
Which areas of business do you most benefit from using these tools in? Of course, we know that they should be used everywhere, but which areas are fastest to see discernible results?
Andrzej Czechowski: As you mentioned, all areas of the company can benefit by analyzing data and gaining information, but the biggest beneficiary is usually the sales department, as there they need to analyze their activities on a daily, mainly quantitative basis. The Product Manager will also see results quickly, for example, if they analyze the price from the point of view of how market trends have changed and add an analysis of market and marketing trends to their data, such as market research agency data. Another such area is Supply Chain Management, where you have to deliver the product optimally, so you also need the relevant reports. On the other hand, the managers of teams and divisions are an important group of beneficiaries, because they must constantly analyze and review key KPIs from their own point of view.
And the areas that still have some developing to do in this respect? Those that still don’t take advantage of these opportunities 100%?
Andrzej Czechowski: In general, analytics are still developing in all areas. On the other hand, the one in which it is particularly visible, in our company too, is the digital sphere and above all e-commerce. I see the potential for a truly huge revolution in this area, from an analytical standpoint as well. Because we have more and more data here. These areas are still learning how to operate. Opportunities and needs in these terms will certainly grow. Soft information is becoming increasingly important, meaning what the customer thinks, likes, dislikes, an analysis of what is happening on discussion forums or vlogs. It certainly opens up vast possibilities for text analysis and analytics in general.
What do you think are the most important technological trends in Business Intelligence?
Andrzej Czechowski: Mobility is definitely at the forefront. Let’s admit that most of Samsung’s devices, such as phones, are already small computers. The mobility of analytical solutions goes hand-in-hand with the development of the devices we use, which have ever-increasing amounts of computing power and virtually unlimited resources. The devices that we carry with us are becoming centers of management, and there are now almost no analytical solutions that would not be suitable for mobile devices. Analytics is becoming more and more popular. At the moment you can have a really good, really cheap application that can handle most analytical challenges.
On the other hand, more advanced tools are also emerging. Self-learning systems are becoming popular and offer even greater opportunities, such as reliable sales forecasts and the anticipation of sales trends. Cognitive systems are generally a highly fashionable and media-driven topic, sparking the imagination, because we can already see that cars are slowly starting to drive themselves. Which raises the question: where is it all heading? Will it be like “2001: A Space Odyssey”, where ‘Hal’ takes over the business? One thing is certain: cognitive systems are developing, and decision-making is increasingly automated, because in some respects it is more reliable than any analyst. And of course, technology will take over more areas of our lives and business, so more and more data on these activities will be analyzed. We can see it today, for example in cloud services and IoT. It’s already happening.
Could you tell us more about the Samsung Business Consulting service?
Andrzej Czechowski: At Samsung we have seen everything mentioned here, the strength of analytical solutions, and how they change the organization. And since Samsung cooperates very closely with B2B partners, the concept of supporting our partners from the analytical side arose, so as to share the experience we have gained with them. This is a “practitioners for practitioners” format. We have gained experience, we see the pros, cons, and we know which solutions work and how quickly they can be implemented in the organization. For us it is also a very big advantage because firstly, we are building a long-term business relationship with a partner, and secondly it is also a much simpler platform of communication. It’s highly convenient to work on the same set of numbers. We go to our partner and talk about e.g. sales data, trends in the market, and we see the same numbers because we work on the same data systems and our knowledge is convergent. Only 18% of larger companies in the Polish market have a mature BI system. So we see that the demand for such systems, as well as the knowledge of how to use them, is great – among our partners too. This is why the Samsung Business Consulting concept, which will support them not only through tools but also through analytical solutions services, was created.
You cooperate with JCommerce in the areas of Custom Development and BI. Could you tell us more about these joint projects?
Andrzej Czechowski: We have been cooperating with JCommerce for several years now – I think that this cooperation is really effective and I hope we can continue to cooperate. We always set ourselves specific requirements and goals which are achieved, and long may it continue.
OK, that’s all the questions I have. Thank you very much, it was great to hear your insights.
Andrzej Czechowski: Thank you very much.
Outsourcing – Does Remotely Mean Worse?
One of the most common indications of a lack of trust is a reluctance to cooperate remotely. In my opinion this is a somewhat paradoxical phenomenon. We live in a time of technological development unprecedented throughout the history of civilization. We can conduct a video chat from Munich or London with a cousin from Australia, or show our parents in Poland photos of our trips from Canada, or study via courses taught by lecturers in the United States without leaving home in Manchester. Moreover, doctors who are hundreds of kilometers away from the patient can consult and even perform surgical procedures.
So why would a potential customer, on hearing that a developer could work for him remotely, respond as if someone were trying to get him to dive head-first into an empty swimming pool?
Where does this need for control, which may be illusory in practice, come from? Why assume that remote tasks must involve lower quality, as well as security risks? The paradox is that often the same person shares his private network and confidential matters in good faith that no one will use this data against him, yet prefers traditional forms of carrying out projects at work. But technology can carry a threat in both our private and professional lives, because eavesdropping, tracking, or data theft are just as commonplace as the positive examples mentioned earlier. Technology is ethically neutral, but the way it is used is determined by people. Security is also a product of the technologies, rules and good practices used, both privately and professionally. The same is true both remotely and on-site.
But returning to the topic at hand – an unwillingness to work remotely is most often explained by the following arguments:
- integration of the team is necessary,
- real-time communication between team members is required,
- workers are physically present at headquarters out of habit,
- management is unwilling to run such risks,
- it has been tried, but didn’t work out,
- we don’t want to lose control of security,
- it’s important that people are connected to us.
But let’s look at the facts. Around 70-80% of developers at JCommerce work remotely for clients. Interestingly, these clients are mostly foreign companies: from Switzerland, Germany, and Scandinavia. How is it that cooperation proves successful, despite the tyranny of distance, the rarity of face-to-face encounters (due to the optimization of flight and accommodation costs) and the need to communicate in a foreign language? Communication works smoothly, the quality of service is high, and problems are solved as and when they arise.
It is also important that the customer does not have to take care of office space, furniture and equipment. Security, which is of vital importance, is ensured through the appropriate management of access and permissions (in accordance with previously agreed procedures). In some cases, the client passes equipment on to the developer, including the computers they administer according to the company’s standards. Remote work is generally not a security risk, at least no more than on-site work.
Finally, we come to the so-called ‘soft’ issues, such as the problem of employee integration. The brutal truth is that if the project is badly run and such factors as chaos, lack of documentation, lack of consistency, or a bad working atmosphere arise, then even the physical presence of team members in the office will not change anything. Developers, whether they are permanently employed or just on a contract basis, will look for other options, and will soon find them in today’s business climate. Conversely, a good atmosphere and efficient communication within the project can also be achieved from a distance, merely by using the appropriate tools and techniques of team management. The conclusion is that the integration of workers and the possible problem of rotation within the project are not in any way dependent on geography.
In summary, it is worth consciously leaving one’s comfort zone, abandoning the stereotypical limits to development, and exploiting the benefits of remote work, which means:
- no office overheads or cost of commuting,
- a broader choice of specialists (we have developers from many cities, even countries, at our disposal),
- better quality work – here the phenomenon of scale comes into play, i.e. an employee of the service provider, working remotely, can more easily and quickly consult on the problem with equally or even more experienced colleagues from his or other teams,
- a good IT service provider will also have no issues with allowing the customer to test the remote work option in practice (e.g. a trial period with the possibility of terminating without notice and then proposing a different specialist).
Industry 4.0 – A New Generation of Smart Factories
The contemporary counterpart to Detroit of the 1990s is the German economy, which is still based on industrial production despite the digital revolution. Here, however, the similarities end: unlike its American counterpart, the German economy is in a very good state, with no sign of any change in the near future. No wonder, because it was here that the idea of Industry 4.0, the fourth industrial revolution, was born. It may sound somewhat bombastic, but it is worth taking a closer look at specific projects and solutions, as these are what make the German economy one of the most competitive in the world, notwithstanding the growing demographic problems associated with an aging population and ever fewer people of working age.
The Fourth Industrial Revolution
In brief, the first industrial revolution was based on mechanical production driven by the steam engine; the next revolution was mass production and electrification; while the third was the introduction of integrated circuits that allow automation into factories. As mentioned above, Detroit is the symbol of the exhaustion of the economic model created by these phases of evolution in industry. In the case of the fourth revolution, the driving force is the network, but in a much wider sense than the Internet itself:
- social networks, signifying the development of the network economy, based on a network of partnerships between companies, co-operation, and business networks,
- the Internet of Things – a key element of so-called smart factories, that operate based on a network of interconnected production machines equipped with sensors, readers and recorders that collect data and regulate the production process,
- the Internet of services – on the one hand, this means cloud services; on the other hand, the specialization and outsourcing of services, using a partner network, which is also possible thanks to the use of modern communication technologies, enabling work to be done remotely,
- the Internet of data – the use of both one’s own and external data as a resource to ensure the competitiveness of your enterprise. It also includes phenomena such as mass customization, or mass production including parameters identified by specific customers, as well as fog computing, the transient state between the local server and the cloud, and completely new ways of communicating – both within the company and with customers and business partners.
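The smart-factory loop described in the second point above – sensors collect data and the system regulates the production process – can be sketched in a few lines. This is a purely illustrative example; the machine, target and thresholds are invented:

```python
# Hypothetical sketch of an IoT feedback loop: networked machines report
# sensor readings, and a controller issues an adjustment when the
# averaged value drifts outside tolerance.

from statistics import mean

def regulate(readings, target, tolerance):
    """Return an adjustment signal based on averaged sensor readings."""
    avg = mean(readings)
    if abs(avg - target) <= tolerance:
        return "ok"
    return "decrease" if avg > target else "increase"

# Example: imaginary laser-cutter temperature readings vs. a 60.0 target
print(regulate([61.2, 60.8, 61.5], target=60.0, tolerance=0.5))  # -> decrease
```

In a real smart factory the same pattern runs continuously across many interconnected machines, with the collected data also feeding higher-level analytics.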
The Fourth Industrial Revolution has the potential to address some of the most pressing contemporary issues facing industry, and more generally the economy as a whole, such as innovation and productivity problems, demographic problems, and the necessity of developing industries that are ecologically sound, economically efficient, highly innovative, and bring more added value than the service sector, but are also less susceptible to economic crises (reindustrialization).
Paradoxically, Industry 4.0 is also a response to social unrest in highly developed countries caused by relocating production to countries where manufacturing is cheaper. It turns out that this phenomenon, which is typical of globalization, is losing steam. While in the United States this is simply Donald Trump’s unfulfilled campaign promise, many of today’s leading German companies, such as Adidas and Stihl, are bringing production back to Europe. But what has brought this on?
Offshoring production is becoming less and less profitable (due to rising labor costs in Asia, which is also the result of globalization), and does not ensure swift order fulfilment, which is especially important in industries such as fashion. The ever shorter fashion cycles (the days of having two seasons, spring-summer and autumn-winter, are already a thing of the past, as we now have at least eight seasons), as well as mass customization (Adidas has long been offering customers the chance to customize their own shoes online), mean that production is obliged to move closer to the customer. But production is usually much more expensive there. So robotics is the solution, and the German and Swiss markets are leading the way.
Bystronic and smart factory solutions
For many readers, the theory of the fourth industrial revolution may sound naive, like another idealistic economic stimulus project, full of murky definitions and unclear guidelines. Therefore, it is worth giving an example of how it can work in practice. To do this, I will use the example of the Swiss company Bystronic, with whom I have the opportunity to cooperate in person.
Bystronic offers advanced systems and services for industrial processing, including the cutting of various materials by means of laser and water jet, as well as sheet bending (for the automotive and shipbuilding industries). The equipment produced and used by the company requires specialized software that ensures the highly precise processing of materials. To that end, the company decided to establish a partnership with JCommerce, entrusting them with software development.
“The demand by many users for automation and digital process solutions is increasing. This trend is being intensified by impulses from the field of Industry 4.0, which are changing also the world of sheet metal processing. Software plays a key role in this transformation. Software solutions support users in the planning, interlinking, monitoring, and optimizing of all their processes. In cooperation JCommerce and Bystronic are working on new software solutions, in order to support customers within a world of automated and networked manufacturing”, Bystronic says.
JCommerce specialists are currently implementing software projects to support Bystronic by developing new solutions in the field of Industry 4.0.
Clearly, the strategy of Bystronic is the idea of Industry 4.0 in practice, meaning that the company is aware of the potential of information that can be obtained using industrial machines, connected to a common data processing system, in the IoT model. It also takes advantage of the opportunities that a network of trusted business partners brings, carrying out tasks more effectively thanks to its specialized nature. Bystronic remains a company in the industrial sector, it is still a factory above all – although a smart version – because it utilizes advanced IT technologies to a great extent.
Donald Trump and Business Analytics – Trends in BI in 2017
In a broader sense – not just in political terms – this could be due to the phenomenon of data pollution. Just as cities are suffocating in smog, virtual reality is suffocating from too much information. Experts from Qlik say that this phenomenon is so severe that it will come to define technological trends in the coming years, just as in business. Certainly this is the case in the field of Business Intelligence. 2017 will mark the beginning of the fight against data illiteracy – that is, the process of spreading the ability to “read” data: to analyze, verify and select it. Other trends for 2017 are Big Insights, business intelligence based on context, and the increasing use of data analysis tools by employees at all levels.
It is estimated that by 2018, 80% of data stored will be completely useless, with neither the possibility nor sense of processing it. This is directly related to the abovementioned phenomenon of data-pollution. Infrastructure for data storage is cheap and widely available, so companies are producing an increasing number of bytes – unfortunately, their value is questionable at best. The collection of such data is often art for art’s sake, without purpose and strategy, just a vague idea that it may prove useful sometime down the track. The result is that even information which is important to a company often dies in the black hole which is the database. Such a situation fails to facilitate the wider use of IoT, which is the Internet of Things. Like every great idea, which originally was to serve the good of humanity (economical and ecological houses or cities, the comfort and convenience of senior citizens and people with disabilities, etc.), the Internet of Things is becoming a caricature of itself. The Internet can be connected to absolutely everything from the kettle to the cat’s litter tray, collecting terabytes of completely useless data. Wired magazine mentions that the ironic term the Internet of Shits is ever more popular – which basically means the imminent death of ideas, at least in their present-day, gadget-like form.
Big Insights and data visualization based on context
Everything points to the fact that the coming years will mark the end of the Big Data fetish and the beginning of Big Insights, which is a critical approach to the data being processed. And there will be more and more of this data, and it will be more nuanced. Augmented reality and IoT will bring about the contextualization of data in the real world, which will enable the capture of specific events (our actions, decisions, and behavior) in a particular place and time. And this will further blur the boundary between the physical and virtual worlds. The game Pokemon Go is just one such example. This also means that business analytics will need to cross this boundary.
Data analysis must be based on an ever wider context. Otherwise, the company runs the risk of operating in a virtual bubble. A similar phenomenon is now being observed by social networking researchers, who have noticed that users operate in an environment of friends who are similar to each other, with access to selected information served to them depending on the choices they make (the number of likes) and calculated by preference algorithms. This is the so-called filter bubble. Of course, the image of reality which thus arises is false and distorted, and it is also harmful in many respects, because it means that our choices influence the shape and content of the information presented to us. For business this situation is equally dangerous: a company functioning in a business reality created by the paradigm of its own data is on the direct route to being isolated from the expectations of customers and the situation on the market – and, of course, to financial disaster. Worse, it may be completely unaware of this danger, precisely because it uses the most modern IT solutions. The conclusion is obvious: it is not enough to analyze one’s own data – it is ever more important to confront this data with external data and take that into account in the decision-making process. Even if – and perhaps especially when – such data makes us uncomfortable and disturbs our comfortable perspective.
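The danger described above can be shown with a toy calculation. The figures and keys below are invented purely for illustration: internal numbers alone look healthy, but joining them with external market data reverses the conclusion:

```python
# Invented quarterly figures: the company's own revenue index vs. an
# external estimate of total market size for the same quarters.
internal_sales = {"Q1": 100, "Q2": 110, "Q3": 121}
market_size    = {"Q1": 100, "Q2": 125, "Q3": 150}

for quarter in internal_sales:
    # Confronting internal data with the external context
    share = internal_sales[quarter] / market_size[quarter]
    print(quarter, f"market share index: {share:.2f}")

# Viewed internally, sales grow 10% per quarter; viewed against the
# external data, market share is shrinking: 1.00 -> 0.88 -> 0.81.
```

Inside the filter bubble of its own database, this company is celebrating growth while losing the market.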
The democratization of data analysis
On the one hand, we must decide what data to collect, but on the other hand, we must learn to read the data. In companies it will mean the dissemination of tools for business intelligence. But what does this mean exactly? Well, access to advanced analytical tools can no longer be reserved exclusively for top-level executives. Access must also be granted to all employees, who can more effectively carry out their tasks thanks to the use of data. Not only that – analytical initiatives (i.e. how and what is to be analyzed) must be bottom-up, because every employee knows their area of operation best and knows what data is most useful. An employee also adds his own input, a unique perspective, which significantly reduces the risk of enclosing decision-makers in a virtual “filter bubble” distorting the image of reality. Companies must therefore develop a new complex ecosystem of Data – People – Ideas. The IT department must be at the center of this, and must be equal to the task in terms of the provision of relevant data and the mechanisms for processing it.
This is obviously a much more complicated task than simply implementing the appropriate Business Intelligence tools – entrepreneurs must in fact change the way their businesses operate and focus on educating (training) employees in acquiring, analyzing and using data on the job. Data analysis will soon cease to be a narrow specialization for IT people only and become a key competence of every employee, regardless of position – on a par with the language skills and teamwork abilities without which employment in a modern company is practically impossible.
Skanska about Cooperation with JCommerce – Interview with Péter Béres
530,000 m² of office space completed, 170,000 m² under construction and 320,000 m² planned, according to data from 2015 for Central and Eastern Europe. This makes a total of over one million square meters of existing and planned offices spread over several cities. Managing such a rich portfolio must be a huge challenge?
Indeed, we realized that without a sophisticated controlling application it is not feasible to prepare high-quality, timely forecasts, as we are dealing not only with huge areas in terms of square meters, but also with the hundreds of assumptions behind every project. In our business we need to look a minimum of 5 years ahead in every forecasting period for each project, preparing several scenarios with different assumptions and combining such project scenarios in different project plan versions, resulting in huge, complex calculations which are almost impossible to follow in Excel. Shifting forecasting into such an application clearly saved time without compromising on quality.
How did cooperation between Skanska and JCommerce come about?
Our cooperation started off with a project to optimize the Microsoft Dynamics AX ERP system in the Polish branch of the company, Skanska Property Poland. The system previously in use within the company no longer met expectations, and so, in 2012, JCommerce specialists became involved in its development and its adaptation to the needs of the company. After this initial modernization of the system, JCommerce also proposed the creation of a web application using .Net technology, which has enabled our users to enter and view data without licensing restrictions on access. This solution proved highly popular, and from there new ideas came up for additional modules, such as Rent Roll, a list of lease agreements, and Gain on Sales Calculation, a tool for recording the profits associated with these projects.
The system implemented at the Polish division of our organization by JCommerce was presented in this form at the CEE Commercial Property Development meeting, and received wide acclaim. The decision was therefore taken to extend its operation to other markets in the region and integrate it with our internal Korab II system. JCommerce specialists have also assumed responsibility for those stages of the project.
What is Korab II?
Korab II is our own ERP system, which Skanska uses in the CEE (Central and Eastern European) countries. Its integration with the solution created by JCommerce enables the flow of data between the systems, both reference data, such as country names or service codes, and financial data. Changes made in one system are automatically reflected in the other.
What were the main problems which you wanted to solve?
Before the implementation of the system we had huge issues with the consistency of the data: the lack of a unified data structure, different units, different currencies and different formulas for calculating the same parameters, depending on the methodology in use in that country. In individual countries, the same parameters or indicators often differed greatly from one another, due to legal, tax, or economic differences. As a result, the company had problems with standardizing indicators that are vital for development project management, such as the market value of the project, the value of land, costs, inflation, or lease profitability.
The situation was further complicated by the fact that data was entered into a number of different systems simultaneously, and therefore often differed between them – errors came up, and we sorely lacked a single database for analytical processes. The preparation of reports was highly onerous under such conditions.
Can you give us the main advantages of the system and the benefits which have resulted from its implementation?
The system records all leases, rents, discounts, and annual profits, using the data warehouse. With a significantly improved speed of data entry, users do not have to enter data in different databases; we have direct access to current information and transparent reports. With the web application data being available in all locations at the same time, information is consistent and orderly, without errors resulting from incorrect data entry. Scalability is also a considerable advantage, as is the ease of adaptation to the increased needs of our company, thanks to the technologies used as well as licensing models.
Are you able to name any key performance indicators which you have managed to improve by implementing the system? Can you give us any specific numbers?
I can’t talk about financial KPIs, but I can confirm that we have already shortened the reporting consolidation process by one day, which we can “give back” to the operating countries, so they now have an extra day before their reporting deadline. This results in better quality, which can be seen in the number of questions that need to be asked and the number of revisions that the countries need to submit. We foresee that the forthcoming new modules, which connect the application directly with HFM, the consolidation tool used by Skanska worldwide, will result in a similar saving of one extra day in 2017.
Skanska and JCommerce continue to work together on the development of the system. What is the current priority?
At this point we are working on the addition of other data grids which we would like to process and analyze; and we plan to extend the range of functionality of the system, for example the Land Valuation tool which analyzes changes in the reported book value of land compared to market value. We also plan to connect up our new CRM system and are considering storing some external market data.
What is the future of the system?
The system has been presented in all of Skanska’s business regions, meaning Scandinavia, the UK, and the United States, and there is a desire to roll out the system in all locations. This is obviously another big challenge that requires the adaptation of the system to local conditions, such as different accounting systems, or changing the metric system to the imperial system in the United Kingdom and the United States. But before that happens, we want to tidy up all processes and implement all planned functionalities in the system currently operating in the CEE region, using it as something of a pilot study.
The Future of New Technologies
During the inauguration lecture of our hackathon, you said that over the next 20 years the world will change more than it did over the previous 200 years. If so, then for sure a lot of professions that we know today will disappear. So which do you think will carry on or will develop anew? Which profession is the best choice, for ourselves or our children?
First, you need to have an open mind – there is no point deluding yourself that today’s common patterns will work in the future. I think everything will change ever faster. Perhaps those young people who have come here today to take part in the workshops and competition represent the last generation for whom driving will be a common skill. For the next generation it might just be a hobby, like horseback riding is for us, and most people will have cars which drive themselves to the destination. And that means you have to be very open to change. The most valuable professions, those with the best prospects, will be all the jobs associated with data – that’s for sure. So the job of a statistician, an analyst, but also an engineer. There will also be some professions which are strictly related to the humanities, such as a psychologist who will lend us an ear, or the arts – the professions which either cannot be, or which we will not want to see, computerized.
So then which jobs will become obsolete?
All those in which humans had to learn a set of rules and now simply apply them – such as primary care physician, lawyer or translator. These are professions in which computers are beginning to take over from humans. How many of us are treated by Dr. Google? In the future, instead of blindly pestering Google, we will have a computer equipped with artificial intelligence which has actually studied medicine – meaning it has learned from real cases what the symptoms are, what the causes are, what happens to a person during illness, and how to treat it. And such a computer, just as it plays chess today, will someday be able to diagnose patients – perhaps even more effectively than a human doctor.
So apart from programmers, not many of us can sleep soundly?
Not necessarily, because on the other hand, there are also jobs that could be computerized, but have not been. Some time ago I worked on a project for a garbage truck that gets around without a garbage man or a driver. And indeed several such garbage trucks were produced. It was a pretty good prototype: it actually went round in the morning without human involvement, the computer and sensors controlled the mechanical arm which gathered up the trash, and the garbage truck continued on its way. But in the end the project was abandoned. The arm didn’t always manage to pick up the trash, or the garbage bins were not always in the right place, or someone had forgotten to put them out, or they were hidden somewhere. The problems could probably have been solved, but in the end it was simply cheaper to hire people to do the job. And this is quite telling. Once upon a time in fantasy books or science-fiction movies, it was machines or robots that twisted the screws and did the worst jobs, and people functioned as managers. But now it turns out that it is often quite the opposite. In shopping centers, logistics centers and loading bays, the computer tells the worker where to go, and a man with an earpiece hears: six steps to the left, two steps to the right, the third shelf, raise your hands… These roles can be completely reversed.
It’s a bit like in the Matrix…
It could turn out in one of many ways, as computers are very good at making decisions and, increasingly often, we let them. For example, in the recent high-profile case of driverless cars, which from time to time will have to decide, for example, whether to kill a passenger or a pedestrian who forced his way onto the road.
So the most important roles will be played by the programmers, right?
Well, not quite. Because artificial intelligence is not programmed. At least not in the sense that there are sets of rules that we have input, and now the machine must abide by them. No. It works like this: at the beginning we put some data into the machine. Information, figures, some content. For example, a model of artificial intelligence, which was asked to write an essay, had previously learned the content from Wikipedia. As it read, it received the command: contribute your opinion. There were no rules which would have regulated this process in advance.
Well, but if we do not implement rules at the start, we may completely lose control of the process. As with the example of the bot which was supposed to learn how to interact with Twitter users. The result was that it turned into a racist Hitler-lover, a ‘hater’ to all around, and had to be switched off … And this driverless car which we talked about, based purely on data, would probably sacrifice the person who was older, in worse physical condition, or of less importance to society. Because such is the logic of data. But it is socially unacceptable.
I think that it will. Because data changes the way of thinking – the paradigm is changing. Today we have our beliefs and our views, so in science we have hypotheses. After they have been constructed we set out to verify them, we draw some conclusions and get to work. In the model I am talking about, there is no initial hypothesis. Only data – text, numbers, pictures. Then there is the model, meaning that we have learned something from the data: first comes the abstraction, then the generalization; we have some rules, but the rules are derived from the data. And only with these rules are we able to draw any conclusions. And because there is so much data, and everything is somehow interconnected, this model turns out to be not so terrible. We can tell it: learn from this data, then verify it using different data. And then something arises which was missing in the approach based on one’s belief system: confrontation with empirical evidence. This way I can very easily judge whether I was right or not.
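The "learn from this data, then verify it using different data" step can be sketched with a toy example. The data points and the single-threshold "rule" below are illustrative assumptions, not a real model – the point is only that the rule is derived from one set of data and then judged against a separate one:

```python
# Toy example of rules derived from data: fit a threshold classifier on
# training data, then verify it on held-out validation data.
# All numbers here are made up for illustration.

train = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]   # (feature, label) pairs
valid = [(1.5, 0), (3.5, 1), (2.5, 1)]             # different data, for verification

def fit_threshold(data):
    """Pick the cutoff that best separates the labels in the given data."""
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in data):
        acc = sum((x >= t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

t = fit_threshold(train)   # the "rule" comes from the data, not from a hypothesis
acc = sum((x >= t) == bool(y) for x, y in valid) / len(valid)
print(f"threshold={t}, validation accuracy={acc:.2f}")
```

The learned rule fits the training data perfectly, yet scores lower on the unseen validation data – and that gap is exactly the confrontation with empirical evidence the interviewee describes: the separate data set tells us how far the derived rule generalizes.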
I don’t know if you know, but it has been estimated that as many as 70-90% of scientific papers, especially in the field of medicine, are falsified. The conclusions drawn are simply untrue. Why? Because someone had their hypothesis, perhaps even somehow reached it objectively, for example, he had a group of patients, he found something noteworthy. On this basis, he developed a hypothesis, then generalized it to all of us. But without the support of the data. Because really the result came first, and the data was adjusted to fit later. Because the data can easily be juggled, if we already have our conclusions before we even start.
Okay. But people have a tendency to impose rules. Just as medicine has bioethics, which prohibits certain tests and treatments, purely because of convictions, there may be a need for such regulation in computer science, the creation of infoethics …
I don’t know what it would look like, but it would be very interesting. Perhaps it will turn out this way, but at this stage, however, it is still science fiction.
Well, yes, because we are talking about artificial intelligence, which is also still just science fiction. But now let’s focus on something that is more real. The analysis of Big Data. Is it ethical to analyze all the data on network activity, payments, GPS positions? Theoretically we are able to connect all this data to a particular person, say, a Social Security number and know almost everything about him, even the most personal, intimate details.
That’s true. And it actually happened some time ago. There’s a book, Dataclysm, or data cataclysm, about how much computers know about us. The book was written by an American who ran an online dating site. It’s a specific kind of site where you can lie about certain issues, but you can’t, for example, lie about your preferences, because you want to meet a person who you find interesting, not the opposite. So there are aspects of your privacy which you have to tell the truth about, and yet these are things that you wouldn’t want to read about yourself in the paper. It’s amazing how much people are willing to tell you about themselves after all. The analysis of such data allows us to build a complete profile. As being off-line is slowly becoming a kind of luxury, so privacy is already a luxury nowadays.
That’s why we’re starting to see regulations come into effect. The European Union ratified the ‘right to be forgotten’ last year – so these are the first steps to giving us back the rights to our data.
That’s true. But on the other hand, we know that the Americans, the Russians, the Chinese and others are also listening to and recording all the telephone calls in the world, all emails, anything you have ever said or written. There are people who analyze and archive it for some reason. The technology already allows it – the storage and processing of data is now so cheap that governments are able to do it. Being anonymous is really a luxury, but it seems to me that people don’t really want it. They are able to put a lot about themselves out there online. It’s true that if we want to enjoy the benefits of the Internet, we need to share data. The system needs to know about us. And that’s okay. However, in using our data, we must take into account the benefits and drawbacks. We sell our data for tangible benefits. And the problem lies in the fact that in reality we sell data for below its true value. Or even for a song.
Okay. We’ve gone off topic a bit. Since there are so many unknowns and so many threats, what decisions about the future can we make to minimize this risk, even a little? Even if only in a professional sense, where to start?
The key word is data. There is more and more of it, and there is going to be more still. We generate it, devices generate it, and soon we will process it on an even greater, unprecedented scale. So people who deal with this data, plus those who have an open mind, will be increasingly indispensable. Anyway, for now there is a lack of them – good professionals are always in short supply in developing fields. In a year’s time they’ll be lacking even more, and in two years more still.
But won’t it be another ‘golden direction’? Twenty years ago, parents dreamed that their children would become doctors or lawyers. Nowadays management is most fashionable. And everybody has a tertiary degree. And now they have nowhere to work. Isn’t it going to be the same all over again? What if a machine ends up taking the place of this analyst?
That’s true. After twenty years it might actually turn out that these professions related to information technology which we know today will no longer exist. Today, however, universities are not able to turn out as many graduates as the labor market needs. Information technology today is changing the world and drives development in its entirety. It changes the scientific approach; it affects all areas of the economy and human activity. Twenty years is not all that short-term a perspective. It seems to me that during these next twenty years, the outlook in terms of IT specialists won’t worsen. And what comes next? After that, we simply don’t know, nobody can predict what will happen.
Poland Rules 2016’s IT reports!
For several years now, our country has been one of the most attractive markets for foreign direct investment and a leader in terms of the number of new jobs created. According to Ernst & Young analysts, this year Poland is 5th in Europe in this respect, after economic powerhouses such as Germany, the UK, France, and the Netherlands. What are the reasons behind the level of investor interest in Poland? The high competence of Polish specialists is probably the driving force.
Polish developers are slowly becoming the Polish economy’s most valuable brand. HackerRank compared the programming competences of specialists from 50 countries around the world and ranked the Poles as high as 3rd place, after only the Chinese and the Russians, but leaving the Germans, Britons and even Americans far behind. Moreover, when competences in terms of particular specializations were compared, Polish Java programmers took 1st place!
Small wonder, then, that international business service centers are popping up like mushrooms in Poland – in the A.T. Kearney Global Services Location Index ranking, Poland is the most popular location for such centers in Europe. Due to its low costs, high professional competences and favorable business environment, Poland is also an IT services outsourcing powerhouse, and has been recognized as the most attractive location for nearshore outsourcing in Europe according to the Raconteur report.
Outsourcing After Brexit – a Central / Eastern European View
Let’s start with what we have to lose. Piotr Zyguła, CEO of JCommerce SA, is moderately pessimistic. “The share of profits from the UK market in terms of the total export earnings of our company is about 7%, so any problems with maintaining this figure will not significantly affect the financial position of JCommerce. However, in recent years this share of earnings has consistently increased, and we saw further cooperation agreements as an opportunity to build a strong position on Western markets. For our employees, who of course are key stakeholders, it’s an opportunity to work on interesting international projects. It would be hard to give all that up.”
In theory, not much will change in the near future. Until completion of the “divorce” from the EU, which will probably take a few years, the United Kingdom remains a member of the Union and all parties are obliged to abide by the existing rules. In practice, however, they may be “lost years” because Brexit is inherently associated with great uncertainty about the future form of relations between the EU and the UK, which in turn has a negative effect on the markets and can stifle business relations, which do not take kindly to risk. Among other things, it is why EU officials have already called on the British government to begin the Brexit process.
The strength of the pound to date has made IT outsourcing to the countries of continental Europe, especially Central and Eastern Europe, as well as to Asia, very profitable for the British. Brexit brought about a sell-off of the pound, while the dollar, the euro and the Swiss franc became relatively more expensive. The cheap pound makes services abroad, including outsourcing, more expensive. The pound is also cheaper in relation to the Chinese yuan and Indian rupee (both are popular markets for outsourcing IT services). In our region of Europe the countries that stand to lose most of all are those that have adopted the Euro, such as the Baltic countries, Slovakia and Slovenia.
What does all this mean for Poland? “Just like the currency of Hungary, and the Czech Republic, the zloty is getting cheaper. Paradoxically, these problems act to stabilize the position of domestic outsourcing companies – a cheaper currency allows you to remain competitive. Outsourcing in Estonia, Slovenia, India, and China is more expensive because of the cheaper pound, so Poland is becoming more attractive for British business partners. The only question is whether the mood associated with Brexit will lead them to avoid cooperating with us?” – wonders Piotr Zyguła.
Life after Brexit – the new legal reality
Some of the major advantages of outsourcing IT services to other countries of the European Union for British companies were the similar legal systems and the universality of EU standards. British companies collaborating with business partners – for example from Poland – can count on the same treatment as Polish companies, so they can claim damages without major problems, as guaranteed by EU law.
After the UK leaves the EU, depending on the model of further cooperation, the systems might become more and more different. Piotr Zyguła expresses his doubts: “Will the United Kingdom continue to participate in the single EU market, which implies the free movement of goods, capital, services and workers? If so, to what extent, if not, what barriers will arise, and how much will they cost? In this context, will we be able to remain competitive?” Business abhors a vacuum, so sooner or later, both sides will be forced to find new business partners. But will it bring them increased benefits? And how many companies will go under in the meantime? It is difficult to estimate at this point.
A weaker union, a weaker market
The outlook for the outsourcing industry could be adversely affected by a potential economic slowdown. Some estimates say that Britain could lose up to 5% of its GDP within the first few years, during the process of its exit from the Union. On the other hand, the economy of the Community will also suffer, although the effects will be spread more evenly throughout the individual member states. The EU budget also stands to suffer losses, which will mean fewer resources to support innovation and new technologies, which will probably affect the entire IT industry, indirectly at least. Years of uncertainty, falling investor confidence and – most likely – price increases will probably reflect negatively on the level of IT investment, both in the UK and other European countries. A domino effect will probably arise that could affect Polish companies as well.
Will a Polish plumber replace a fellow Pole?
Perhaps, however, these problems will not dissuade British companies from outsourcing, especially if it turns out that the lack of suitable staff will begin to further strangle the British economy. After leaving the EU, the British labor market may be (although not necessarily) closed or restricted. This doesn’t just affect the proverbial Polish plumbers, as it will also complicate matters for the IT industry, and as a result the number of vacancies for engineers may increase. During this year’s London Technology Week, analysts predicted that about 850,000 more IT specialists will be needed in Europe by 2020, of which 180,000 will be required in the UK alone. One may have doubts as to whether these specialists will be found on the local market, which is already saturated and which is already straining under the weight of a lack of manpower (not only in IT, but in other industries as well). This can lead to an increase in the salaries of specialists on the local market, and de facto push British companies to take advantage of outsourcing to a greater extent, in order to fill staff vacancies.
The British view
Brexit itself is of course not universally popular with the British people, 48% of whom voted Remain. Andrew Kirby, a teacher for Dynamic English in Katowice, Poland, which has been co-operating with JCommerce for three years now, expresses uncertainty about how the British decision will affect his countrymen, having voted by proxy in the referendum. “It is scary to think that 1.3 million people” – the difference between the number of Leave and Remain voters – “can determine the fate of not just our country, but the entire continent of 500 million people.” However Kirby stresses that nobody really knows at this stage just what the effects will be.
Andy Gillin, CEO of Dynamic English, is also unsure of what to expect. “Nobody knows what’s going to happen, that’s what people are afraid of. I don’t think Brexit will be an easy process, but all we can do is hope that business is not affected too dramatically. Perhaps it could even bring about some unforeseen opportunities in business – we’ll see! But we just don’t know.”
The coming years will see great uncertainty and an unpredictable level of risk. The IT outsourcing industry will have to learn how to operate under such conditions. As we have seen, Brexit involves significant risks, but also brings opportunities for development. Some companies can run into trouble, but those which are most flexible and ready to take risks may turn this situation to their advantage – as usually happens in times of crisis. So what can be done today? I guess – along with the rest of the world – we can only look at what is happening in Downing Street and keep an eye on developments.
Thanks to our dynamic development, the growing knowledge and skills of our employees, and the use of the newest technologies, we are able to deliver ever more sophisticated solutions. Our international experience in implementing QlikView, Microsoft BI and Liferay tools guarantees us a strong position on the Scandinavian market.
We currently cooperate with our Finnish partner and are planning another Business Intelligence project, which will be customized for a medical device manufacturer in Copenhagen. It is possible that at the same time we will begin another project in Oslo, the capital of Norway.
We are now highly engaged in one of the most famous running events in Poland – the Silesia Marathon – where we act as a technological partner and also have our own team of runners. Moreover, our specialists have created a mobile app for iOS, Android and Windows Phone which supports competitors while they run.