At its WWDC 2019 event, Apple unveiled new technologies it said will make it easier and faster for developers to create powerful new apps.

Showcasing its SwiftUI development framework, Craig Federighi, SVP of software engineering, said it “transforms user interface creation by automating large portions of the process and providing real-time previews of how UI code looks and behaves in-app”.

The company said that, using simple, easy-to-understand declarative code, developers can create full-featured user interfaces with smooth animations. SwiftUI saves time by providing a “huge amount of automatic functionality”, including interface layout, Dark Mode, Accessibility, right-to-left language support and internationalisation.
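
To give a flavour of that declarative style, here is a minimal sketch of a SwiftUI view (the view name and content are invented for illustration): the body simply declares what should appear, and the framework handles layout and animation.

```swift
import SwiftUI

// A hypothetical view written in SwiftUI's declarative style: the body
// describes what should appear, and the framework handles layout, Dark Mode
// and animation automatically.
struct GreetingView: View {
    @State private var showDetail = false

    var body: some View {
        VStack(spacing: 12) {
            Text("Hello, WWDC")
                .font(.title)

            if showDetail {
                Text("Built with declarative code")
                    .foregroundColor(.secondary)
            }

            Button("Toggle detail") {
                // withAnimation animates the resulting layout change.
                withAnimation {
                    self.showDetail.toggle()
                }
            }
        }
        .padding()
    }
}
```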

SwiftUI is compatible with iOS, iPadOS, macOS, watchOS and tvOS.

A new graphical UI design tool built into Xcode 11 makes it easier for developers to quickly assemble a user interface with SwiftUI, without writing any code. SwiftUI code is generated automatically and, when that code is modified, the changes instantly appear in the visual design tool.

The company said the tool’s ability to move fluidly between graphical design and writing code makes UI development “more fun and efficient”, and also makes it possible for developers and designers to collaborate more closely.
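
Under the hood, Xcode 11’s live canvas is driven by a PreviewProvider declaration in the source file. A minimal sketch, reusing the hypothetical GreetingView from the earlier example:

```swift
import SwiftUI

// Declaring a PreviewProvider is what powers the live canvas in Xcode 11:
// edits to the view's code are reflected in the preview as you type.
struct GreetingView_Previews: PreviewProvider {
    static var previews: some View {
        GreetingView()
            // Preview the automatic Dark Mode support without changing the app.
            .environment(\.colorScheme, .dark)
    }
}
```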

AR features
Apple said its ARKit 3 technology “puts people at the centre of AR”. With motion capture, developers can integrate people’s movement into their apps and, with people occlusion, AR content will display in front of or behind a person to deliver a more immersive AR experience and enable green-screen-like applications.
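
As a rough sketch of how an app opts into people occlusion, assuming an existing RealityKit ARView named arView (motion capture is enabled in a similar way via ARBodyTrackingConfiguration):

```swift
import ARKit
import RealityKit

// Turn on ARKit 3's people occlusion: personSegmentationWithDepth asks ARKit
// to place virtual content in front of or behind people in the camera feed.
func enablePeopleOcclusion(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()

    // Not every device supports the feature, so check before enabling it.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }

    arView.session.run(configuration)
}
```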

ARKit 3 also enables the front-facing camera to track up to three faces at once, along with simultaneous use of the front and back cameras. Collaborative sessions are available too, making it faster to jump into a shared AR experience.
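
Both capabilities are opt-in flags on the ARKit configuration classes. A minimal sketch; the surrounding session setup, and the peer-to-peer networking used to actually exchange collaboration data, are assumed to exist elsewhere:

```swift
import ARKit

// Opt into ARKit 3's collaborative sessions.
let worldConfiguration = ARWorldTrackingConfiguration()
worldConfiguration.isCollaborationEnabled = true  // share anchors with nearby devices

// Track as many faces as the device supports, up to three on ARKit 3.
let faceConfiguration = ARFaceTrackingConfiguration()
faceConfiguration.maximumNumberOfTrackedFaces =
    ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
```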

The company also touted RealityKit, “built from the ground up for AR”, featuring photorealistic rendering, “incredible environment mapping”, and camera effects such as noise and motion blur.

RealityKit is accessed via a new Swift API.
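
A minimal sketch of that API, anchoring a placeholder box to a horizontal surface (the size and material are arbitrary):

```swift
import RealityKit
import UIKit

// Create a RealityKit view and attach a simple box entity to a real-world
// horizontal plane detected by ARKit.
let arView = ARView(frame: .zero)

let box = ModelEntity(
    mesh: .generateBox(size: 0.1),                              // 10 cm cube
    materials: [SimpleMaterial(color: .gray, isMetallic: true)]
)

let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(box)
arView.scene.addAnchor(anchor)
```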

Also debuted was Reality Composer, a new app for iOS, iPadOS and Mac enabling developers to prototype and produce AR experiences with no prior 3D experience. With a drag-and-drop interface and library of high-quality 3D objects and animations, developers can place, move and rotate objects to assemble an AR experience that can be integrated into an app in Xcode, or exported to AR Quick Look.

Mac crossover
Apple unveiled new tools and APIs to make it easier to bring iPad apps to Mac computers.

With Xcode, developers can open an existing iPad project and “check a single box” to automatically add fundamental Mac and windowing features, and to adapt platform elements such as touch controls for keyboard and mouse.

Mac and iPad apps also share the same project and source code, so any changes made translate to both, saving developers time and resources. Apple said this provides a “huge head start” on building a native Mac version of an app.
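
Where the Mac version does need platform-specific behaviour, the shared source can branch on the Mac Catalyst target environment. A sketch, with an invented view controller standing in for shared app code:

```swift
import UIKit

// The same view controller ships in both the iPad and Mac builds; only the
// code inside the targetEnvironment check differs between them.
class DetailViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        #if targetEnvironment(macCatalyst)
        // Runs only in the Mac build of the iPad app.
        navigationController?.setNavigationBarHidden(true, animated: false)
        #endif
    }
}
```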

Machine Learning
A feature dubbed Core ML 3 accelerates more types of advanced, real-time machine learning models. With more than 100 model layers now supported, developers can deliver apps that understand vision, natural language and speech. And for the first time, developers can update ML models on-device using model personalisation, a move the company said protects user privacy.
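
On-device personalisation is exposed through Core ML’s update task API. A hedged sketch, assuming the model has been compiled as updatable and the app supplies a batch of the user’s own training examples:

```swift
import CoreML
import Foundation

// Retrain an updatable Core ML model against the user's own examples; the
// data and the personalised model never leave the device.
func personalise(modelAt modelURL: URL,
                 with trainingData: MLBatchProvider,
                 savingTo updatedModelURL: URL) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,
        trainingData: trainingData,
        configuration: nil,
        completionHandler: { context in
            // context.model holds the personalised model; persist it so
            // future predictions use the user's own data.
            try? context.model.write(to: updatedModelURL)
        }
    )
    task.resume()
}
```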

In addition, with Create ML, a dedicated app for ML development, creators can build models without writing code. Multiple models can be trained with different datasets, using new model types including object detection, activity classification and sound classification.

Apple Watch
The introduction of watchOS 6 and the ability to access the App Store directly from Apple Watch mean developers can now build, design and distribute apps for the wearable that work independently of an iPhone.

Developers can also tap the Apple Neural Engine on Apple Watch Series 4 via Core ML which, along with on-device interpretation of inputs, gives users more intelligent apps.

A new streaming audio API means users can listen straight from Apple Watch. An extended runtime API gives apps additional time to accomplish tasks in the foreground, even if the screen turns off, including access to sensors measuring heart rate, location and motion.
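
A sketch of how an app might adopt the extended runtime API; the controller class here is invented, and the app must also declare a supported session type in its watch extension configuration:

```swift
import WatchKit
import Foundation

// Keeps the app's work going after the screen turns off by running an
// extended runtime session and responding to its lifecycle callbacks.
class SessionController: NSObject, WKExtendedRuntimeSessionDelegate {
    private let session = WKExtendedRuntimeSession()

    func begin() {
        session.delegate = self
        session.start()
    }

    // MARK: WKExtendedRuntimeSessionDelegate

    func extendedRuntimeSessionDidStart(_ extendedRuntimeSession: WKExtendedRuntimeSession) {
        // Safe to keep sampling sensors such as heart rate or motion here.
    }

    func extendedRuntimeSessionWillExpire(_ extendedRuntimeSession: WKExtendedRuntimeSession) {
        // Wrap up work before the allotted time runs out.
    }

    func extendedRuntimeSession(_ extendedRuntimeSession: WKExtendedRuntimeSession,
                                didInvalidateWith reason: WKExtendedRuntimeSessionInvalidationReason,
                                error: Error?) {
        // Handle the session ending early.
    }
}
```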