This assessment helps us understand how you already use AI, how technically confident you are, what speed you work at, and what kind of support will actually help you. We use it to shape workshop delivery, follow-up resources, and the right next step inside Melbourne AI Hub.
The assessment covers three areas:
- AI use: frequency, tools, prompting, verification, and practical application.
- Technical confidence: files, spreadsheets, automations, APIs, and confidence building things yourself.
- Working speed: a desktop typing test when available, plus a checklist and your time capacity.
You will be placed into a clearer content track rather than treated as a generic attendee. That lets us pace the room better, spot potential builders and facilitators, and route people toward the right offers after the workshop.
Results are stored to help shape content, follow-up, and membership/product pathways. We are measuring readiness so we can support people more intelligently, not to exclude them.
It is a signal layer for the workshop, member pipeline, and follow-up services. The same structure can be used for attendees, member applicants, and future facilitators such as Olu and Stephan.
We can tell whether the room needs more basics, more implementation detail, or more hands-on support.
We can segment who needs replay resources, who wants done-with-you help, and who is ready for membership or pilot work.
The same scoring helps us identify who can support future sessions, build in public, or own a vertical such as energy and environment.
The assessment takes five to eight minutes. The more honestly you answer, the better we can shape your support.