Petershausen, 15th October 2019

 

camLine to announce new LineWorks solutions at SEMICON Europa


Software Solutions for Manufacturing Excellence


camLine, a leading provider of software solutions for the semiconductor and microelectronics industry, will announce a new range of LineWorks products designed to support manufacturing excellence at this year’s SEMICON Europa, taking place from 12th-15th November in Munich. The company will exhibit at Stand 221 in Hall B1.

One of the products being showcased is the company’s new web-based add-on, LineWorks CDM (Chemicals and Gases Distribution Management). Designed for the LineWorks SPACE infrastructure solution for Statistical Process Control (SPC) and Quality Assurance in production, this new feature enables seamless tracking and tracing of delivered gases, chemicals, and other raw materials, together with all corresponding quality data, through to provisioning at the plants. It also enables semiconductor manufacturers to determine how these raw materials affect process and product quality at any point in the supply chain.

In addition, camLine will be showcasing its new LineWorks CQM (Customer Quality Management) add-on, which helps suppliers efficiently generate electronic Certificates of Analysis (eCOA) that can be sent automatically to customers for approval. LineWorks CQM then triggers the delivery process as soon as the customer has approved.

Finally, camLine will also demonstrate the LineWorks RM cross-factory Recipe Management System and how it can increase productivity by improving the integrity of product definitions, recipes, and manufacturing instructions, and by ensuring clearer identification of manufacturing processes.

Visitors to the stand will also have the opportunity to learn about the many benefits of camLine’s Process Development Execution System (PDES), XperiDesk, and the statistics software, Cornerstone – including the ability to optimise and accelerate R&D processes to bring products to market faster, comprehensive Statistical Process Control analysis, experimental designs (DoE), and the ability to streamline engineering processes.

Note to Editors

LineWorks CDM makes it possible to automatically trigger repeat orders before the inventory is exhausted. All types of transport systems such as drums, containers, etc. are provided with ID numbers (container, stack, equipment, location ID). The quality data required by LineWorks CDM comes from the LineWorks SQM (Supplier Quality Management) supply chain solution, which is based on camLine’s flagship product, LineWorks SPACE.

LineWorks SQM processes and manages electronic Certificates of Analysis (eCOA). In distributed supply networks, such management of eCOAs guarantees binding quality standards from suppliers (Tier 1-n) to manufacturers and improves collaboration. Prior to delivery, the supplier – whether a tier supplier or a manufacturer – sends customers the collected quality data of the corresponding (semi-)finished products and materials (gases, chemicals) as eCOAs via B2B, e-mail, or a web portal, in accordance with the customer's specifications. There, the data is immediately checked, and as soon as it has been validated, the customer automatically issues the delivery release. This way, the customer only receives goods that meet their specifications and quality requirements – meaning cost savings and reduced Incoming Quality Control (IQC) checks. At the same time, efficiency in the supply chain is significantly increased and delivery reliability is ensured. Suppliers (Tier 1-n) also benefit from this process because they avoid expensive returns, and unnecessary shipments of goods are largely eliminated, benefitting the environment.
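To illustrate the kind of check that an eCOA-based delivery release involves, here is a minimal, hypothetical sketch in Python. The field names, specification limits and release logic are illustrative assumptions only, not the actual LineWorks SQM interface.

    # Hypothetical sketch of an eCOA check: measured values from a supplier's
    # electronic Certificate of Analysis are compared against the customer's
    # specification limits; delivery is released only if every value passes.

    SPEC_LIMITS = {                # customer specification (illustrative values)
        "purity_pct":   (99.995, 100.0),
        "moisture_ppm": (0.0, 0.5),
    }

    ecoa = {                       # quality data reported in the eCOA
        "purity_pct":   99.998,
        "moisture_ppm": 0.3,
    }

    def validate_ecoa(ecoa, limits):
        """Return True if every reported value lies within its spec window."""
        return all(lo <= ecoa[name] <= hi for name, (lo, hi) in limits.items())

    if validate_ecoa(ecoa, SPEC_LIMITS):
        print("eCOA accepted - delivery release issued automatically")
    else:
        print("eCOA rejected - shipment held, supplier notified")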

LineWorks RM centrally manages product definitions in a single database which can be dynamically customized. The system offers alternative, powerful distribution strategies to meet a wide range of customer requirements. For example, the final manufacturing instruction, which can consist of 400 or more steps, can be generated automatically during download from the database. In times when product definitions are becoming increasingly complex - think of standards such as ISA88 or SEMI - this represents real added value.

 

Petershausen, 10th October 2019

 

Intuition in Manufacturing – Shining a Personal Light on (Dark) Data

The desire to automate has been one of the drivers behind the development of manufacturing since time immemorial. Humans have always worked hard to try and lessen the need to work hard, with each industrial revolution bringing increasingly sophisticated machines onto the manufacturing floor to take the strain for us.

Industry 4.0 is the latest revolution, introducing the concept of a 'Digital Twin' to the factory floor. Intelligent machines can create a virtual version of themselves, composed of the data they produce, which can direct process decisions based on performance parameters and AI networks. As our capabilities here improve, we can begin to reduce the number of employees on the frontline, automating more and more processes that once had to be manual.

With so many machines creating Digital Twins, big data is increasingly placed at the heart of business decisions in the modern manufacturing enterprise, informing strategy across every area of the manufacturing process. According to research from IDC, there will be 163 zettabytes of data by 2025; that’s 163,000,000,000,000 gigabytes!

Big data offers an opportunity to make serious, immediate improvements to the cost-effectiveness of manufacturing operations, raising the quality of manufacturing while reducing support costs. For example, data streams can track defects, conduct forecasting for the supply chain, and analyse machinery for maintenance needs.

However, for the best possible results we need to marry these insights and strategies with the personal intuition of the boots on the ground. Our desire to automate can’t marginalise the value of those on the manufacturing floor; we need to find the right balance between the benefits of automation and the intuition of our experts.

Character Building

The cliché that we could remove all of the staff from the factory floor is — within our lifetime at least — a ludicrous proposition. Industry 4.0 and big data are still in their relative infancies (the German government initiative from which Industry 4.0 takes its name was only publicised as recently as 2011), and we simply aren’t able to exercise the control over our machines that we’d need to make this a reality.

The tiny differences between supposedly identical machines are a good example of current limitations. Machines may be built to the same specifications but develop their own character – qualities like wear and tear, different combinations of replaced parts over time, and so on – that make every machine unique. Over time the performance of these machines will diverge, and it’s difficult to adjust data analyses to compensate accurately for these tiny disparities.

An operator who has worked with these machines as they’ve developed will recognise the differences intuitively, literally feeling the difference in qualities like noise and vibration. This operator’s annotations on a data sheet can often prove to be of more use, and more cost-effective, than digital analysis.

The Human Factor

Taking this one step further, it’s not just that data struggles to accommodate ‘identical’ machines. It’s also not very good at predicting the impact of people, and there will almost inevitably be human processes that impact the results of the data you’re receiving.

Most of these elements are down to human inconsistency – if you’re delayed in beginning a manufacturing cycle by two minutes every time, how many cycles do you lose per year? There’s also machinery in and around the manufacturing floor that doesn’t boast IoT connectivity but can impact the analysis process. For example, if using the microwave for breakfast every morning disrupts a key Wi-Fi network, this can have a major and seemingly inexplicable impact on productivity.

You might counter that, if we continue to develop our data analysis capabilities, we could eventually track every possible metric on the manufacturing floor and finally have the data we need for full automation. This overlooks the fact that, even with our ‘limited’ capabilities at present, we’re already producing far more data than we can actually use.

Many manufacturers are recording dark data — data produced by operations or analysis, but not used — in the hope that it will be beneficial at some future point. Others aren’t recording it at all, unaware that it could be of use. With IHS Markit forecasting more than 30 billion connected devices by 2020, you can imagine the difficulty of leveraging the intimidating amount of data that this network will produce.

Individual intuition is, again, a major asset here. With those on the manufacturing floor able to help provide the context for their efforts, data scientists are able to ask the right questions of the data sets to hand, align the data of value, and clean up the results. Individual expertise thereby allows manufacturers to genuinely get the most out of the data they create and collect.

Ultimately, all of these examples illustrate the need for data to be managed by a human element that is close to the operational process, and not just from an observer’s perspective. The manufacturers who will benefit from the most insightful, cost-effective processes can only identify and establish them with the help of those immersed in said processes.

This article was published on www.emsnow.com »

Petershausen, 6th September 2019

 

InFrame Synapse Equipment Connector:
Reduce Costs by up to 75 percent for IIoT-based machine integration

 

To optimise manufacturing processes for future market success, it’s essential to embrace IIoT and Industry 4.0 through the connectivity of existing machinery. The camLine InFrame Synapse Equipment Connector (EQC) can help manufacturers do this in a quick, cost-efficient manner.

 

The camLine InFrame Synapse Equipment Connector (EQC) is an innovative integration software solution that enables manufacturers to quickly and easily equip non-IIoT-capable systems with an interface that connects them to their current Manufacturing Execution System (MES). Thanks to its open interfaces, EQC can easily be linked with any MES – whether a camLine product or one from any other provider – in true plug-and-play style. In addition, camLine’s EQC is able to communicate with all programmable logic controllers (PLCs). Plant manufacturers who use EQC to implement standard-compliant or individual interfaces for their equipment also benefit, as they can make these interfaces available at low cost.

 

With the recent explosion of IIoT and Industry 4.0, manufacturers can no longer avoid connecting their entire machinery and integrating it into their existing MES. Through the connectivity of machinery, manufacturers gain complete visibility over their production processes. This enables them to further optimise their potential on the shop floor, increase productivity through successful tracking and tracing, and plan more efficiently and strategically.

 

Making systems IIoT-capable via Plug&Play

There is a lot of catching up to do here. Machinery has adapted and improved over the years, but a majority of older systems still lack an IT interface or are not IIoT-compatible. The subsequent development and programming of data interfaces to connect such machines to an MES is time-consuming and, in most cases, associated with high costs.

 

Up to 75 per cent lower costs

Since the interface is created through configuration rather than programming, costs are reduced enormously – in some cases by up to 75 per cent. The effort required for interface implementation and the MES connection also drops accordingly, a significant value-add.
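As a rough illustration of what configuration-driven integration can look like, here is a short Python sketch. The protocol, tag addresses and MES parameter names are hypothetical assumptions and do not represent the actual EQC configuration format.

    # Hypothetical sketch: instead of programming a bespoke driver, the
    # integration is declared as data - a mapping from PLC signals to MES
    # parameters - which a generic connector interprets at runtime.

    EQUIPMENT_MAPPING = {
        "equipment_id": "OVEN-07",
        "protocol": "OPC UA",                      # assumed transport
        "signals": {
            # PLC address            -> (MES parameter, unit, scaling factor)
            "ns=2;s=Oven.TempZone1": ("zone1_temperature", "degC", 0.1),
            "ns=2;s=Oven.DoorOpen":  ("door_open",         "bool", 1),
        },
    }

    def translate(plc_address, raw_value, mapping):
        """Convert a raw PLC reading into a named, scaled MES parameter."""
        name, unit, scale = mapping["signals"][plc_address]
        return {"parameter": name, "value": raw_value * scale, "unit": unit}

    # Example: a raw register value of 2350 becomes 235.0 degC for the MES.
    print(translate("ns=2;s=Oven.TempZone1", 2350, EQUIPMENT_MAPPING))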

 

camLine supports its customers with know-how based on more than 30 years of experience, which has also been incorporated into the development of its EQC solution. As a member of the semiconductor and photovoltaic association SEMI, camLine was actively involved in the development of the relevant interface standards (SECS/GEM).

 

Read the whole article in German in IoT Wissen Kompakt 2019, p. 19 »

(IT & Production, TeDo-Verlag)

Petershausen, 22nd August 2019

 

Automation and Intuition: Big Data and the Human Side of Manufacturing

The fourth industrial revolution is well underway. Industry 4.0 has gone from being the name of a German government project to an industry-wide trend encompassing end-to-end digitisation and data integration across whole enterprises.

Physical equipment can now have a 'Digital Twin' – a virtual representation of itself – which is able to inform predictions and subsequent processes. Automation is at an all-time high in terms of decision making and process control.

As a result, we have more data than ever with which to inform business decisions. From machines at the heart of the manufacturing process to incidental mechanisms like the supply chain or transportation, big data is providing the basis for better and quicker strategic decisions.

The potential that big data has to make operations more cost-effective is obvious. A 2017 survey from management consultancy McKinsey & Company suggested that the implementation of big data in manufacturing could boost pre-tax margins by 4-10%, improving everything from machine life to output.

The seemingly obvious conclusion — that you should push the benefits of big data to the maximum, quantifying and automating as much as you can — does not hold. The most effective enterprises will recognise the limitations of Industry 4.0 and continue to value the expert on the manufacturing floor, marrying individual intuition with automation.

Man vs machine

The cliché that automation can lead to the total removal of the engineer from the manufacturing floor is a pipe dream, at least for our lifetime – we would need far more sophisticated AI mechanisms to make this a reality. The most effective digitalisation that we can implement right now remains at least partially reliant on the boots on the ground. No matter how many metrics are at your disposal, there are always insights that human experience, expertise and intuition can offer that won’t be picked up by digital measurements.

For example, in virtually every line of manufacturing, the machines are unique; built to the same specifications but with tiny individual differences. Parts will have unique wear and tear, produce different sounds due to being in different areas of the factory floor, and so on. Big data drawn from these machines is not going to recognise these differences, which can lead to seemingly inexplicable discrepancies in the results.

If an operator has been working with an individual machine for long enough, they can feel whether or not a machine is working properly through vibrations, noises, appearance, and so on. Data isn’t capable of replicating this or providing the context for it, and in many cases an operator’s annotations on a data sheet may offer greater insight than further digital analysis.

Utilising dark data

It’s also true that, even with modern data analysis techniques, the sheer volume of data that a manufacturer produces is too much to use. Dark data — data that you record and don’t use, or that isn’t recorded at all — can’t contribute to the insights that an analyst is trying to glean.

Many companies aren’t even aware of the dark data they store, whereas others simply log it and forget it until a point at which they can make use of it. Given that IBM estimates that 90% of the data generated by sensors and analogue-to-digital converters is never used, and that most companies only analyse 1% of the data, huge opportunities for further insight are being passed up by failing to utilise this resource.

Again, this is where human interpretation and intuition is capable of making the difference. Data scientists can offer an entirely new perspective, bringing light to dark data by reframing it in more accessible formats. They can ask the right questions, align the data of interest, and clean the results to make them more useful to decision makers; without human inputs to define the right context, you’re not going to maximise the utility of your data.

Finally, the unpredictability of human interference can also be difficult for data analytics alone to diagnose. The parameters of data analysis are limited to things directly related to a machine. They won’t, for example, explain how other human processes may disrupt things like performance, or even the analytics process itself – you’ll need to work that out for yourself.

For example, we have previously worked with an automotive manufacturer that found the wireless system used to underpin IoT communications on the manufacturing floor would regularly drop out during the same period every morning. The data showed the loss of connectivity, but it took human intervention to identify the problem; the network was disrupted every time an employee used the microwave to heat their breakfast!

All of these examples demonstrate the importance of the individual engineer, and the impact that they can have on the overall profitability of a manufacturing business. A talented individual is capable of filling in the gaps in our current data analysis, can make the most of the data that we fail to understand or use at all, and can read the behaviour of their co-workers better than any machine. The manufacturers who want to run the most insightful and cost-effective operations cannot afford to underestimate the influence that the individual can have on both profit margins and internal processes.

This article was published on www.eenewseurope.com »

Petershausen, 7th August 2019

 

Manufacturing misconceptions: The difficulties of tackling big data

The advent of Industry 4.0 offers huge potential to explore new and exciting changes to the manufacturing floor, comments Dirk Ortloff, Department Manager, camLine.

Intelligent machines capable of “speaking” to one another and collating a myriad of complex data promise huge improvements in productivity … and fundamental changes to the ways in which we view manufacturing efficiency.

However, the wealth of data available to manufacturing organisations is growing larger by the year, increasing the complexity of its analysis. Industrial equipment is becoming capable of storing and sharing types of data that were previously impossible to capture – such as vibration data that feeds into wear analysis – in increasingly intimidating volumes.

With the speed of development inherent in Industry 4.0 and the sheer volume of data at hand, many manufacturing organisations simply don’t have the know-how to handle big data storage and analysis. Facing data in more formats and higher volumes than ever before, it’s no surprise that they can be overwhelmed; it’s easy to miss the wood for the trees and fail to take full advantage of the resources to hand.

To avoid missing out on the benefits of appropriate analysis, manufacturers are increasingly turning to in-vogue data analysis techniques in pursuit of the most up-to-date procedures.

In-vogue inaccuracies

For example, it’s common for manufacturers to begin with a “data lake” to analyse all of the available data at once. On the surface, the logic is sound; the more data in your analysis, the more insight you can potentially receive. If you consider everything, you don’t omit a crucial outlier or an interesting correlation.

However, this is going to lead to performance issues. Larger data sets take far longer to analyse, especially if online analysis is part of the remit. A company in high-volume manufacturing may produce millions of units in the time it takes to analyse their operational data, only to discover that their processes are far less cost-effective than they thought. This can have a huge impact on the company’s cost margins and will reflect poorly on those responsible.

When a data lake approach fails to deliver the desired benefits, we often see people turn to a range of so-called cutting-edge techniques that threaten similar drawbacks if not deployed correctly. As trends in analytics come to the fore, promising results and new ideas can make people overexcited. But it’s easy to apply them inappropriately and end up with unusable, inefficient or misleading results.

For instance, if the data lake approach fails to work, many opt for the polar opposite: a “gathering and systemising” approach. This involves merging as many data bins as possible with a very strong emphasis towards systemising them — with data analysis only beginning once the bins have been systemised.

There’s a serious risk of falling off the other side of the horse here. In many cases, the systemisation doesn’t end, meaning that the data can’t be analysed. This makes it impossible to secure a quick win, with many organisations racking up high costs with no tangible benefit.

Another mistake that many make is opting to conduct data searches without a specific target. This technique selects a bulk of data and uses neural networks to search for anything interesting — standout results, repeating sequences, particular correlations, etc. — and is often performed inexpensively by a trainee.

Without an appropriate data set, this will often lead to unsatisfactory results. It’s difficult to glean any valuable insight without a clearly defined goal; as the process tends to do away with method selection, the results are often far below expectations.

Determining direction

This all demonstrates how unwise it is for companies to commit to in-vogue analytics trends without a serious appraisal of use cases, methodology and the options available to them. It’s understandable to look at successful examples when attempting to find a solution, but data is far more complicated than that.

Attempting to emulate others without a grounding in the logic behind their decisions will do more harm than good, particularly when it comes to adding value and cost-benefit ratios.

This is being recognised by even the highest powers, who are investing in education and data analysis applications. One example is the PRO-OPT research and development project, funded by the Federal Ministry for Economic Affairs and Energy of Germany. The PRO-OPT project looked to help companies operating in "smart ecosystems."

These ecosystems are immensely complex. Modern companies generating huge volumes of data will almost always have infrastructures spanning countries or even continents, as well as external partners to consider. With companies looking to outsource services such as the manufacture of specialist parts, partners like original equipment manufacturers (OEMs) will invariably have complex data infrastructures of their own, further complicating analysis.

Companies without experience in high-volume data analysis will find it extremely difficult to collate all of this data and properly investigate it. PRO-OPT aims to educate and support companies in the analysis of these huge volumes of data. The importance of its mission was recognised by backing from major German organisations including Audi and Fraunhofer IESE.

To examine one PRO-OPT use case, the project tested a wide variety of production data modelling approaches on the data of a leading automotive supplier. This exercise attempted to identify and demonstrate

  • the difficulties of systematically merging different data buckets
  • the possible modelling of the data in databases that are specifically designed to help analysts tackle large sets of distributed data
  • the actual analysis of these large data collections.

Success would mean being able to apply and compare statistically reliable analyses and classification procedures, as well as new procedures from AI instruments.

Sticking up the data bank

This use case stands out as it clarifies the challenges that companies with limited expertise in data analytics face. Without comprehensive preparation, an awareness of the options available and experience of executing them, organisations are inevitably going to hit unexpected roadblocks.

These can start before the process even begins. Securing a tool that could analyse and manipulate data is obviously very important; new technologies, or means of analysing data, have exciting potential. But when you have a new hammer, it’s easy to forget that some things aren’t nails. It’s crucial not to overlook reliable means of exercising control over the data you have.

Statistical process control (SPC) is a tried-and-tested means of doing so. Defined as “the use of statistical techniques to control a process or production method,” SPC was pioneered in the 1920s. The modern iteration is a technology that offers huge synergy with new data analytic techniques.

The ability to make vital production data available ad hoc, for example, or to automate actions to take place in real-time when certain parameters are met, make SPC an incredibly powerful tool through which to interact with data.
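As a minimal sketch of the underlying idea (estimating control limits from in-control history and flagging new measurements against them), the following Python fragment shows a Shewhart-style check. It is a generic illustration, not camLine's implementation.

    # Minimal Shewhart-style SPC sketch: control limits are estimated as the
    # mean +/- 3 standard deviations of in-control historical data, and each
    # new measurement is checked against them as it arrives.
    import statistics

    history = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02]  # baseline
    mean = statistics.mean(history)
    sigma = statistics.stdev(history)
    ucl, lcl = mean + 3 * sigma, mean - 3 * sigma    # upper/lower control limits

    def check(measurement):
        """Return an action string for a newly arrived measurement."""
        if measurement > ucl or measurement < lcl:
            return "out of control - trigger alarm / hold lot"
        return "in control - continue"

    for value in (10.01, 10.19):        # the second value violates the limits
        print(value, "->", check(value))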

To get the most out of an SPC system, and to allow it to analyse and action changes based on data, results need to be loaded into a specialised database. The complex datasets required will often have thousands of variables, all of which need meaningful column names. Many databases can’t support the number of columns needed, or impose limits on the names you can give those columns — so how do you stop this seriously limiting your analysis capabilities?
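One common workaround, sketched below in Python with SQLite purely for illustration, is to store measurements in a long, key-value layout so the number of distinct parameters is not bounded by the table's column count. This is a generic pattern, not a description of any particular SPC product.

    # Instead of one column per parameter (which runs into column-count and
    # column-name limits), each row stores (unit_id, parameter_name, value).
    # Thousands of parameters then need only three columns.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE measurements (
                       unit_id   TEXT,
                       parameter TEXT,
                       value     REAL)""")
    rows = [
        ("wafer-001", "oxide_thickness_nm", 10.4),
        ("wafer-001", "sheet_resistance_ohm_sq", 52.3),
        ("wafer-002", "oxide_thickness_nm", 10.7),
    ]
    con.executemany("INSERT INTO measurements VALUES (?, ?, ?)", rows)

    # Pivot back to a wide view only for the parameters an analysis needs.
    for row in con.execute(
            "SELECT unit_id, value FROM measurements "
            "WHERE parameter = 'oxide_thickness_nm'"):
        print(row)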

Once the analysis is under way, does your organisation have the time and schedule to make it cost-effective? Most SPC solutions operate offline, analysing data retrospectively; only the most modern solutions are able to analyse online, in real-time. How do you balance the analysis you need with the manufacturing output that you require?

Beyond this, even if you can employ a database that can handle the volume of data you are working with and have completed the analysis process, the data needs to be presented in a digestible way. Datasets with millions of data points can’t be displayed in conventional XY scatter plots, as they’ll almost completely fill the canvas — even transparent data points aren’t of any use. How do you go about translating a blob of X million data points into actionable insights?
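A common way around this is to bin the points and plot densities rather than individual markers. The following Python sketch (assuming NumPy and matplotlib are available) is a generic illustration of the approach, not a depiction of any specific product's output.

    # With millions of points, plot a 2D density (binned counts) instead of a
    # scatter plot: structure stays visible even where markers would overlap.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 2_000_000)            # stand-in process parameter
    y = 0.6 * x + rng.normal(0.0, 0.5, 2_000_000)  # correlated response

    fig, ax = plt.subplots()
    h = ax.hexbin(x, y, gridsize=80, bins="log")   # log-scaled counts per bin
    fig.colorbar(h, label="points per bin (log)")
    ax.set_xlabel("parameter")
    ax.set_ylabel("response")
    fig.savefig("density.png")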

These are just examples of the tip-of-the-iceberg-level thinking required to perform effective analysis, which goes to show just how careful analysts need to be. Without a considered roadmap of the journey that the data needs to take, and how analysts will both identify the data they need and then break down the results, it is all too easy to fail.

However, with the right equipment and ethos, analysis that used to be inefficient owing to the sheer volume of data can offer completely new insights into production data.

This article was published on www.manufacturingchemist.com »

Petershausen, 24th April 2018

 

camLine introduces version 7.1 of Cornerstone to the market

Novel engineering analytics for Lean Six Sigma

camLine GmbH, developer of software solutions for operational excellence, is introducing version 7.1 of Cornerstone. The new version of the engineering statistics solution for fast Design of Experiments (DoE) and data analytics introduces novel visualizations for even faster root-cause analysis and extended Big Data capabilities. Paired with various graphical improvements such as alpha-blending, the newly offered methods and approaches extend the software's efficiency lead in engineering analytics. Its capability spectrum ranges from technical statistics and experimental planning (DoE) to explorative data analysis.


One major feature in version 7.1 of Cornerstone is the novel Multi-Category Chart visualization. This tool allows the simultaneous visual analysis of up to 100 categorical variables and provides the basis for a much more efficient root-cause analysis. Together with the new Multi-Vari charts, Six Sigma engineers receive tools to become much more effective in analyzing Big Data, which can now be integrated via several new interfaces and methods. Improvements to the graphical representations through extended color palettes, true color support, anti-aliasing and alpha-blending foster effectiveness in analyzing ever-growing amounts of engineering data.

You can find more information about Cornerstone Version 7.1 here »

Petershausen, 8th of December 2017

 

camLine GmbH launches XperiDesk 5.4

Faster navigation through the experiment data cloud

camLine GmbH, developer of software solutions for manufacturing excellence, is launching XperiDesk (XD) 5.4. By streamlining existing functions and adding diverse new ones, the new release strengthens XD’s leading position among Process Development Execution Systems (PDES).

The highlights of the XD 5.4 enhancements respond directly to current demands in the field: users can apply previous search results as selectors for semantic relation searches, parameters can now be sorted into (multiple) directories, and the manufacturability check evaluates calculated parameters. Last but not least, XD’s backend technology stack has been completely renewed, giving it more performance, versatility and maintainability.

With search-result-based relation searching, it has become much easier to sift iteratively through the experiment data. As before, the user can search for a set of items fulfilling their search criteria. They can then multi-select from these results and, based on the selected items, start digging through the semantic experiment network. Because this approach can be repeated, the user can easily navigate through the cloud of experiment results; previously, this was only possible for a single item in the relation graph view. Another improvement is the extended functionality for sorting parameters into directories. This frequently requested feature improves the overview in the previously flat parameter maintenance views and even allows one parameter to be sorted into multiple directories, making it easier to support people searching for a parameter from different perspectives. The manufacturability check has been extended as well: in previous versions of the software, only plain numeric parameters were considered, whereas version 5.4 also considers parameters calculated recursively from other parameters.
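To make the iterative idea concrete, here is a toy Python sketch of repeatedly expanding a multi-selected result set along semantic relations. The data model and item names are invented for illustration and do not reflect the XperiDesk API.

    # Toy model of search-result-based relation searching: start from the items
    # matched by a search, select some of them, expand along semantic relations,
    # and repeat - each expansion becomes the input of the next step.

    RELATIONS = {  # item -> semantically related items (illustrative)
        "run-17":      {"recipe-A", "wafer-lot-3"},
        "recipe-A":    {"process-module-etch"},
        "wafer-lot-3": {"measurement-112"},
    }

    def expand(selected):
        """Return all items related to any of the selected items."""
        related = set()
        for item in selected:
            related |= RELATIONS.get(item, set())
        return related

    step1 = {"run-17"}         # items multi-selected from a search result
    step2 = expand(step1)      # {'recipe-A', 'wafer-lot-3'}
    step3 = expand(step2)      # dig one level deeper
    print(step2, step3)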

Other improvements include the seamless synchronization of Runs currently in production between the full client and the operator client, and more versatile regular expression handling in the MS Excel import client. Finally, XD 5.4 comes with various improvements in performance, streamlined user guidance and multi-organization usage. camLine recommends that existing customers upgrade to XD 5.4.

Please find an introductory video about XperiDesk here.

Petershausen, 24th October 2017

 

camLine releases LineWorks RM 6.1

Now available with fine grained control on parameter level

 

camLine, a software company specialized in developing solutions for manufacturing excellence, has announced Release 6.1 of its LineWorks RM recipe management system. LineWorks RM is a connected, IT-based infrastructure solution for production, intended to provide clear evidence of entire manufacturing processes so that they can be unambiguously identified. By comparison, the IDs of both salable products and manufacturing equipment can be captured in many different ways and with little effort; the unique registration of the integrated manufacturing processes and their changes, however, remains a special challenge for production sites, and one with far-reaching consequences for manufacturers' future competitiveness. The management of process changes and their traceability touches many business processes, and their efficient rationalization holds remarkable potential to reduce manufacturing costs.

Version 6.1 offers advanced features in the areas of parameter and recipe body management, recipe object visualization, and validation, as well as improvements to the user interface. With the newly introduced parameter tags and options, users gain enhanced capabilities to classify parameters based on their usage. It is also possible to store parameter options at parameter level: these options are name-value pairs (additional data) stored per parameter, encapsulating user-defined flags/options that are further interpreted by the customer's EI (Equipment Integration).
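As an illustration of the concept only (the class and option names below are hypothetical and not the LineWorks RM data model), a parameter carrying tags and name-value options might look like this in Python:

    # Illustrative only: a recipe parameter classified by tags and carrying
    # name-value options that downstream equipment integration can interpret.
    from dataclasses import dataclass, field

    @dataclass
    class RecipeParameter:
        name: str
        value: float
        tags: list = field(default_factory=list)       # usage classification
        options: dict = field(default_factory=dict)    # per-parameter flags

    p = RecipeParameter(
        name="chamber_pressure",
        value=2.5,
        tags=["critical", "tunable"],
        options={"ei.rounding": "0.01", "ei.readback_check": "true"},
    )
    print(p.tags, p.options["ei.readback_check"])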

Customer-specific validation rules can be defined in the recipe object validators, and the recipe object templates make it possible to enable these validations in a controlled manner. This ensures a gradual deployment of the new validations.

The new version offers a flexible and consistent user interface for setting up custom recipe object and recipe body visualizations.

The end-user will benefit from the single sign-on capabilities of the improved user interface.

Furthermore, the administration of users and notifications is consolidated into a unified web-based user interface.

 

Read more about LineWorks RM Version 6.1 »