How do new COTS MEMS vibration systems compare to traditional piezo-based systems?

Link to YouTube video
Piezoelectric vibration measurement systems (PVMS) are the conventional equipment for condition monitoring, but equipment and cabling make them expensive. Why not use a commercial-off-the-shelf (COTS) micro-electro-mechanical system (MEMS) vibration measurement system (VMS) as an alternative? These COTS-MEMS VMS are cheaper, need no external power or communication cables, and are self-contained, with the sensor, power and data storage in a single portable enclosure.

This work is one of very few to compare the performance of a wireless COTS-MEMS VMS, a COTS-MEMS VMS with local storage only, and a PVMS. Performance of each VMS is assessed in terms of frequency spectrum, noise floor, data loss, and use conditions such as mounting and data collection. Testing was conducted a) in a controlled environment using an isolated vibration shaker, and b) in two industrial applications, on a centrifugal fan and a pump. Results show the COTS-MEMS VMS are comparable to the PVMS for sinusoidal inputs of 5 Hz to 1.5 kHz on the vibration shaker. In the pump test, the high noise floor of both COTS-MEMS VMS prevented detection of the operating frequency. An imbalance was induced in the fan to increase the amplitude at the operating frequency by an order of magnitude, which both COTS-MEMS VMS were able to detect. The wireless COTS-MEMS VMS suffers from data loss due to its use of the UDP protocol; post-processing a sample with data loss causes spectral leakage and a higher noise floor in the frequency spectrum, so it must be avoided. Both COTS-MEMS VMS perform comparably with the PVMS when the vibration amplitude is discernible from the noise floor and when there are no losses in the data sample.
Author: Carlin Lapuz, Master of Professional Engineering (Mechanical) 2020

How difficult is it to fail a bearing?

Link to YouTube video
Rolling element bearings are found in most of the rotating equipment that makes up the modern world. Downtime due to bearing failure can be prevented by using sensor information to predict faults. However, a replicable bearing failure method is necessary for operators to interpret sensor data and reliably detect faults in industrial equipment. Bearing failure testing is usually conducted on small-scale test rigs, and the experimental design is often poorly explained. This study develops a method for inducing bearing failure on an industrial piece of equipment in a short amount of time, using lubricant contamination of the bearing. The findings show that failure can be induced through mixed grease contamination within one hour of testing. There are some great photos of the resulting vibration analysis.

Author: Aseem Maroo, Master of Professional Engineering (Mechanical) 2020

Open data – open innovation

Great talks at the Data Science Week 2019 event at FLUX on open data and open innovation. Thanks to Daniel Cesar from Newcrest for speaking on orphans of innovation and other notions, Jess Robertson on Evergreen challenges and the power of consensus, and Melinda Hodkiewicz on managing industry's concerns about releasing data.

UWA Living Lab Kicks Off

‘Innovators find it difficult to access sites to prove up results and miners are averse to trialling/introducing innovation without proven results’ -METS Ignited 2017

The UWA Living Lab project aims to help bridge this gap.

The Living Lab project funding was officially announced recently at the METS Ignited event at the CORE Innovation Hub. The launch funding is being supplied through the METS Ignited Collaboration Project Fund and from the BHP Fellowship for Engineering for Remote Operations. The project is a partnership with the CORE Innovation Hub and the UWA Facilities Management group.

More details are available on the UWA website.

Meta Project Thoughts

As engineers and scientists, whenever we embark on a new project we are generally full of enthusiasm and excitement, and are raring to get going. This is fantastic, and this enthusiasm should not be dampened, but it is worthwhile to take a few moments to think about how you will record and back up the work that you are doing.

Project Name

Give a few minutes' thought to the name of your project. The project name should ideally be catchy and easy to remember, and bring out the key features you are investigating. Avoid any brand or company names.

References

One of the first tasks when undertaking a new project is the literature review. This will generally generate a lot of references (published articles, websites, books, etc.). To record these quickly and easily, it is recommended to use a reference management system such as Zotero.

Versioning

If a lot of your work is computer based (whose isn't these days?), then it is highly recommended to use some sort of version control system. The one with the most traction at the moment is git. Git is certainly not easy or intuitive to use, but it is powerful. To make life a little easier, there is an online service called GitHub which removes some of the pain associated with using git.

Backing Up

It is all very well to have a version control system so that you can review or recreate your work from any time in the past, but it is important to realise that this is an archive and NOT a backup. At first glance a version control system might seem like a backup, but the difference becomes clear if you imagine wanting to look at an old version of a file and discovering that the hard disk containing your repository has been corrupted. That old version is gone forever, because it existed only in the primary, active repository. The repository itself needs to be backed up. Another analogy is a state library: it contains many archives, but if the library burns down they are gone forever, UNLESS there is a copy (a backup) at some other location.

One of the very important criteria that I have discovered for a backup system is automation. If the backup system is not completely automatic then, in general, it simply will not be maintained.

If you are using GitHub rather than just git, you might decide that maintaining a cloud copy of your repository is sufficient backup. Otherwise, you could choose to back up to a local external hard-drive or use another cloud service.

Collaborating

A Google Docs folder can be useful for rapidly developing a list of resources and ideas for a project in collaboration with colleagues, because it allows real-time co-authoring of a document; however, there are a couple of issues to be wary of. Firstly, the contents of a Google Document do not exist ANYWHERE except on GOOGLE SERVERS or in closed-source GOOGLE APPS. I think this is a major weakness of the system: there is no guarantee that you will be able to access the document in the future. You could forget or lose your account, Google could cancel your account, or somebody with whom you share the file could delete it. A possible solution is to continue using Google but to periodically export the document to a docx/xlsx/pptx file, which can then be version controlled. Secondly, Google Drive does not work well with git, and it is possible to break your repository if it is housed within a Google Drive folder.

There are other real-time collaboration tools available. If writing in LaTeX, these include ShareLaTeX and Overleaf. Office 365 also has real-time co-authoring, although I have not used it.

Look at your project from a variety of perspectives

Generally there is more than one way to think about any project, and it can be helpful to consider it from several different angles. For example, if your project uses a variety of components, you might break it down into those components. Your project will also generally require a variety of skill-sets, so you can divide it up by the skill-sets you will need (e.g. coding, electronics, analysis).

FLOC – Machine Learning meets Formal Methods workshop, Oxford

This one-day workshop on 13 July 2018 brought together the Machine Learning and Formal Methods communities. Here is a summary of some take-aways. Highlights were the talks by Pushmeet Kohli from Google DeepMind UK, Alison Lowndes from NVIDIA, and Adnan Darwiche from UCLA. Melinda Hodkiewicz (SHL) and Ashwin D'Cruz (ex-SHL, now working for Calipsa in London) attended. https://www.floc2018.org/summit-on-machine-learning/

Pushmeet Kohli (Google DeepMind): the challenges for AI are to ensure it is a) robust to adversaries, b) generalises well to variations in the real world, c) fair, and d) compliant with regulations. When talking about fairness, he split the discussion into "What do we mean by fair?" and "How do we make AI fair?". He did not answer the "what" question, saying instead that this needed to be set in regulations. As for the "how", Kohli suggested three steps: 1) rigorous testing, 2) developing robust AI, and 3) verifying AI systems. A significant challenge is that the test-set evaluation approaches commonly used in ML are inappropriate for 1) adversarial environments and 2) safety-critical domains. In safety-critical domains loss functions are unbounded, and you would need many samples of bad events for test-set evaluation, which we cannot afford. He then gave examples of work his team at Google is doing (see his recent ICML, ICLR and NIPS papers) and left us with the idea that we might need a new language for AI with suitable inductive bias (the set of assumptions the learning algorithm uses to predict outputs for inputs it has not encountered) and the right expressiveness to describe what is going on.

Comment: Melinda asked the room if there was anyone at the workshop working for legislators or regulators; there was not. It is not clear how legislators are going to develop workable regulations, or how regulators will have the capacity to assess practice against those regulations, without a good understanding of the issues being discussed at these types of events.

Alison Lowndes (NVIDIA): NVIDIA has developed massive simulation platforms, and Alison talked about their work on Jetson Xavier, an AI computer for autonomous machines delivering GPU-workstation performance in a single embedded module https://developer.nvidia.com/jetson-xavier-devkit. She observed that while reinforcement learning is highly fashionable (80 papers/day published on arXiv), it is not yet commercial. Classical ML methods (SVM, MLP, GBDT) are still very relevant and widely used, as are convolutional neural networks. She expressed concern about the "common person's voice in the room" and suggested that philosophy and psychology will become more important. Finally, and relevant for the SHL and Makers, she said that educational institutions can get a free DevKit from NVIDIA: https://developer.nvidia.com/teaching-kits

Andre Platzer (Carnegie Mellon) talked on 'safe' reinforcement learning via formal methods, with a focus on safety-critical systems. How do you demonstrate that an algorithm is "provably safe"? He talked about the need to 1) learn safety, 2) learn a safety policy, and 3) verify, and about the issue of what happens if the model is incorrect. The safety policy appears to be based on the idea that if we have seen this output before and it was OK, then it should be safe given the same context; but how do we know to trust this, and how do we know if the context has changed? Andre runs the Logical Systems Lab http://www.ls.cs.cmu.edu/ and has a textbook, Logical Foundations of Cyber-Physical Systems.

Sumit Gulwani from Microsoft talked about their PROSE kit https://microsoft.github.io/prose/. This is programming by example: the automatic generation of programs from input-output examples. It can build programs in various languages such as Python, R and C#, and some of its functionality is baked into Excel. Some scripts that are currently tedious to write can be automated.

Marta Kwiatkowska (Oxford) talked about her team's work on 'safety verification for deep learning networks with proven guarantees'. She made the point that while there is an infinite set of possible outcomes, we only measure 'accuracy' on a finite data set. She demonstrated how deep learning networks are unstable to adversarial perturbations using image processing of a car sign (there are plenty of papers on this on arXiv) and asked how we can verify that such behaviour cannot occur. Marta's group at Oxford works on modelling and automated verification techniques for software systems; one of its current projects is safety and trust for mobile autonomous robots. http://www.cs.ox.ac.uk/people/marta.kwiatkowska/research.html

Adnan Darwiche from UCLA presented on "What just happened in AI". The talk draws on his recent paper "Human-Level Intelligence or Animal-Like Abilities?" https://arxiv.org/abs/1707.04327. He made a number of interesting observations: 1) there are lots of new AI applications, 2) AI has been around for more than 50 years, and 3) the AI curriculum is almost unchanged. Essentially every behaviour can be captured to some extent by a function; we are now building bigger functions and we have more data. A deep learning NN is a function, and architecting the structure of a NN is function engineering. He then moved on to how our perception of value has changed. Model-based approaches try to understand a system, whereas ML models translate inputs to outputs without insight. We have realised that in many cases (e.g. social media) you don't need the understanding to be useful, and the ease with which we can get results that are the same as or only slightly better than model-based methods is very attractive. However, he warned about the growing gap between hype and reality and reminded the audience of the period he described as the "AI winter". He warned of a lost generation of AI researchers who are well versed in NN models but not in logic, and of the need to understand the limitations of function-based approaches and to characterise deep learning functions in a scientifically precise manner. It is worth reading his paper (linked above) for a full discussion of his concerns.

Learning from digital humanities for the Siri for Maintenance project

This week Melinda talked with Dr. Beatrice Alex, who works in computational linguistics at the University of Edinburgh, and learned a lot about when and how to include subject matter experts in the NLP and semantic object identification pipeline. We can take some of the lessons learned in her digital humanities projects into our Siri for Maintenance project. Beatrice's homepage is http://homepages.inf.ed.ac.uk/balex/

Attaching the BlueBox to rotating machinery

Today the SHL managed to get a hold of an old and broken record player.

After disassembling the record player we discovered that the fault was a deteriorated elastic belt. We made do with what was available and replaced the belt with a piece of string. Whilst the record player was open we also replaced the wires running to the 12 V DC motor with wires leading to a DC power supply unit. This allows us to control the speed of the record player by varying the supplied voltage, so it no longer runs only at 33 or 45 revolutions per minute. We were then able to place a couple of BlueBoxes on the record player and get live data measuring the acceleration caused by the rotation of the disc bed.

Project Scope

The initial focus of this project will be on understanding and designing a circuit able to transmit a wireless signal from a low input voltage. Using a blog article on a simple scavenger ring as a starting point, it will be the team's responsibility to analyse, modify and improve on the basic functioning ring that was created, drawing on the literature and other research to design a more capable, well-characterised circuit. Once that is complete, a secondary circuit will need to be designed to harvest a particular source of energy to power the first. The options available for this project include Peltier, piezoelectric, electromagnetic and magnetic coil harvesters. For this part of the project, the optimisation and testing of the allocated energy source used to power the sensor will be the main component of the thesis. The overall result will hopefully be a small device that can emit a signal from the small amounts of energy harvested from the chosen source.

Use of microcontrollers and MEMS for condition monitoring

The traditional methods for high-frequency data analysis involve many piezoelectric sensors, signal amplifiers and spectrum analysers the size of a modern-day desktop computer. However, with the arrival of low-cost and easy-to-use microcontrollers and Micro-Electromechanical Systems (MEMS), perhaps it's time to reconsider some of the traditional methods. This blog post outlines some of the challenges, limitations and successes encountered while trying to use a combination of microcontroller and MEMS devices for condition monitoring.

The equipment:

  • Arduino MEGA 2560: a microcontroller board based on the ATmega2560. The large user community and extensive libraries available make it an ideal choice for experimentation.
  • ADXL345: a small, thin, low-power, 3-axis accelerometer with a measurement range of up to ±16 g.

 

ADXL345 (left) & PCB. Coin for scale.

Thanks to advances in rapid prototyping, it is incredibly easy to produce PCBs in low quantities while keeping costs to a minimum. We designed and manufactured our own PCB to support this project. The PCB houses the accelerometer, a temperature sensor, a voltage and current sensor and various other electronics. The whole test setup cost us less than $100.

Challenge #1: Data storage

The Arduino has no onboard storage suitable for logging data, so it requires external storage. An SD card shield is the simplest way to provide this. This method, however, has one big disadvantage: the speed at which data can be written to an SD card. Every time the Arduino has new data to write, a file on the SD card has to be opened, the data written, and the file closed; failure to close the file at the end of each entry may result in corrupt data. This open-write-close cycle is slow and is consequently one of the biggest limitations to recording high-frequency data.
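
To make the bottleneck concrete, here is a minimal sketch of the per-sample open-write-close approach, using the standard Arduino SD library. It is an illustration rather than our actual firmware: the chip-select pin, file name and the analogRead(A0) placeholder are assumptions for the example.

```cpp
#include <SPI.h>
#include <SD.h>

const int SD_CS_PIN = 10;   // assumed chip-select pin for the SD shield

void setup() {
  Serial.begin(115200);
  if (!SD.begin(SD_CS_PIN)) {
    Serial.println("SD initialisation failed");
    while (true) {}         // halt if no card is present
  }
}

void loop() {
  int sample = analogRead(A0);   // stand-in for a real sensor read
  // Open, write and close for every single sample. Closing each time
  // protects against corrupt data, but the cycle is slow and caps the
  // achievable sampling rate.
  File logFile = SD.open("datalog.txt", FILE_WRITE);
  if (logFile) {
    logFile.println(sample);
    logFile.close();
  }
}
```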

Solution: Buffer memory

The ATmega2560 on the Arduino MEGA has 8 KB of static RAM (SRAM), which has a much higher read/write speed than any external flash storage. By writing samples to this memory first and then transferring them to external flash storage in chunks, we can record data at much higher frequencies. However, the small amount of RAM severely limits the number of data points that can be held: we are limited to around 500 data points before the RAM fills up and needs to be transferred to the SD card. The Arduino is also incapable of collecting new data while this transfer is in progress, which means the vibration data is not continuous and can only be obtained in small chunks. There are methods to increase the Arduino's RAM using external RAM modules, which would allow us to increase our sample size significantly, but that is a challenge of its own, and one we will hopefully tackle in the near future.
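
A sketch of the buffered approach might look like the following (again an illustration under the same assumed pin and placeholder sensor read, not our production code): samples accumulate in a fixed SRAM array and are flushed to the SD card in one chunk, during which no new samples are collected.

```cpp
#include <SPI.h>
#include <SD.h>

const int SD_CS_PIN = 10;              // assumed chip-select pin
const size_t BUF_LEN = 500;            // roughly the limit mentioned above
int16_t sampleBuffer[BUF_LEN];         // sample buffer held in SRAM
size_t count = 0;

void setup() {
  SD.begin(SD_CS_PIN);
}

void loop() {
  if (count < BUF_LEN) {
    sampleBuffer[count++] = analogRead(A0);   // fast: write to RAM only
  } else {
    // Flush the whole chunk with a single open/write/close. The Arduino
    // collects no new samples while this transfer is in progress, so the
    // recorded data arrives in discontinuous chunks.
    File logFile = SD.open("datalog.bin", FILE_WRITE);
    if (logFile) {
      logFile.write((const uint8_t *)sampleBuffer, sizeof(sampleBuffer));
      logFile.close();
    }
    count = 0;
  }
}
```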

 

Challenge #2: Obtaining consistent data 

This issue is most relevant for vibrational data.

The ADXL345 supports a maximum Output Data Rate (ODR) of 3200 Hz over SPI and 1600 Hz over I2C; our test rig is currently configured to use I2C. To understand this challenge, some knowledge of the Arduino's operation is required. The Arduino continuously executes a set of instructions in a loop, and can be configured to request data from its sensors on each pass through the loop. The time taken to complete the instructions inside the loop is inconsistent, so the time interval between data points can vary. This inconsistency severely affects time-sensitive data such as vibration data: without a fixed ODR, the vibration data is essentially useless.
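
You can see this jitter for yourself with a sketch along these lines, which timestamps each pass through the loop with micros() and prints the interval between consecutive readings (analogRead(A0) is again just a placeholder for a sensor request):

```cpp
unsigned long lastMicros = 0;

void setup() {
  Serial.begin(115200);
}

void loop() {
  int sample = analogRead(A0);        // placeholder sensor request
  unsigned long now = micros();
  Serial.print(sample);
  Serial.print(',');
  Serial.println(now - lastMicros);   // interval varies from pass to pass
  lastMicros = now;
}
```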

Solution: FIFO Buffer

Luckily, one of the ADXL345's distinguishing features is the inclusion of a First In, First Out (FIFO) buffer. The chip continuously collects data at a fixed rate, stores it in its FIFO buffer, and raises a watermark flag when the buffer fills to a set level. The Arduino can be configured to continuously poll for this watermark and transfer the data from the FIFO buffer into its RAM when the flag is raised. This guarantees a fixed, consistent ODR. Of course, this only works because the Arduino can drain the FIFO faster than the accelerometer fills it.
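
As a rough sketch of what that polling loop could look like (register addresses and bit values are from the ADXL345 datasheet, but the watermark level, output rate and I2C wiring are assumptions for illustration, not our exact configuration):

```cpp
#include <Wire.h>

// ADXL345 I2C address (ALT ADDRESS pin low gives 0x53) and registers
const uint8_t ADXL345_ADDR    = 0x53;
const uint8_t REG_BW_RATE     = 0x2C;  // output data rate
const uint8_t REG_POWER_CTL   = 0x2D;  // power/measurement control
const uint8_t REG_INT_SOURCE  = 0x30;  // interrupt flags (bit 1 = watermark)
const uint8_t REG_DATAX0      = 0x32;  // first of six data bytes
const uint8_t REG_FIFO_CTL    = 0x38;  // FIFO mode and watermark level
const uint8_t REG_FIFO_STATUS = 0x39;  // number of entries in the FIFO
const uint8_t WATERMARK       = 25;    // assumed watermark level (samples)

void writeReg(uint8_t reg, uint8_t value) {
  Wire.beginTransmission(ADXL345_ADDR);
  Wire.write(reg);
  Wire.write(value);
  Wire.endTransmission();
}

uint8_t readReg(uint8_t reg) {
  Wire.beginTransmission(ADXL345_ADDR);
  Wire.write(reg);
  Wire.endTransmission(false);           // repeated start
  Wire.requestFrom(ADXL345_ADDR, (uint8_t)1);
  return Wire.read();
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  writeReg(REG_BW_RATE, 0x0E);              // 1600 Hz ODR
  writeReg(REG_FIFO_CTL, 0x80 | WATERMARK); // stream mode + watermark level
  writeReg(REG_POWER_CTL, 0x08);            // enter measurement mode
}

void loop() {
  // Poll the watermark flag; when it is set, drain everything in the FIFO.
  if (readReg(REG_INT_SOURCE) & 0x02) {
    uint8_t entries = readReg(REG_FIFO_STATUS) & 0x3F;
    for (uint8_t i = 0; i < entries; i++) {
      // Each 6-byte burst read from DATAX0 pops one FIFO entry.
      Wire.beginTransmission(ADXL345_ADDR);
      Wire.write(REG_DATAX0);
      Wire.endTransmission(false);
      Wire.requestFrom(ADXL345_ADDR, (uint8_t)6);
      uint8_t x0 = Wire.read();
      uint8_t x1 = Wire.read();
      uint8_t y0 = Wire.read();
      uint8_t y1 = Wire.read();
      uint8_t z0 = Wire.read();
      uint8_t z1 = Wire.read();
      int16_t x = (int16_t)((x1 << 8) | x0);
      int16_t y = (int16_t)((y1 << 8) | y0);
      int16_t z = (int16_t)((z1 << 8) | z0);
      Serial.print(x); Serial.print(',');
      Serial.print(y); Serial.print(',');
      Serial.println(z);
    }
  }
}
```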

 

Challenge #3: Accuracy

Coming soon: We are testing this and will update shortly.

Praveen Sundaram

Research Assistant

System Health Lab, UWA