Portfolio · Edition 01 · 2026
Solutions Architect Track · 5+ Years · Cloud · Identity · Network · Endpoint · Automation

Jonathan Ikporukpo.

Five years across cloud, identity, network, and endpoint — designing pragmatic, multi-vendor architectures and extending into AI automation for regulated enterprise environments.

Crawley, UK  ·  Open to senior IT, infrastructure, and solutions architect roles
00

About

Five years of hands-on IT engineering across cloud, identity, network, and endpoint domains. Currently extending into AI automation and solutions architecture, with practical experience designing, supporting, and securing hybrid environments in regulated enterprise contexts.

At HCLTech, I support enterprise-scale Microsoft environments — Azure and Azure Active Directory for identity and access, Microsoft Intune for endpoint management, SharePoint Online for collaboration, and Cisco infrastructure underneath. The engagements are ITIL-aligned, SLA-driven, and span both cloud-native and hybrid estates.

Prior to HCLTech, I provided technical service for the SHL cloud-based talent assessment platform, supporting recruiters, administrators, and candidates across multiple regions. Earlier work at SystemSpecs in Lagos covered Cisco switch administration, enterprise security controls, and disaster recovery for a financial software provider.

I work vendor-pragmatically rather than vendor-exclusively. Microsoft is the current centre of gravity, but the patterns travel — identity, network, endpoint, and observability look similar across AWS, Google Workspace, and SaaS estates I’ve touched. AI automation is the area I’m actively building in: Power Automate, Copilot Studio, and lightweight Python scripting against ServiceNow and Microsoft Graph to deflect repeatable work and accelerate triage.

I hold an M.Sc. in Civil Engineering from the University of Ibadan and a Cisco Networking qualification from Emtech in Dubai. The engineering background shapes how I approach IT problems: structural thinking, root-cause discipline, and a preference for documented systems over heroics.

What I optimize for, in order:

  01 Architectural fit — right tool, right layer, right cost
  02 Root cause over recurrence
  03 Secure defaults at identity and endpoint
  04 Documentation that survives a handover

Capability Map

Domains I architect, support, and integrate across.

A vendor-pragmatic stack. Microsoft is the centre of gravity; the patterns travel across AWS, Google Workspace, and SaaS estates I’ve touched.

Service & Governance
ITIL · ITSM · SLA · Change · Audit
Incident · Problem · Request · Change Advisory · ServiceNow / JIRA

Capability Domains
Identity: Azure AD · Conditional Access · MFA
Endpoint: Intune · Workspace ONE · Autopilot
Productivity: M365 · SharePoint · Teams
Network: Cisco · OSPF · BGP · MPLS

Cross-Cutting
AI & Workflow Automation: Power Automate · Copilot Studio · Microsoft Graph · Python
Security & Observability: Least privilege · audit logging · telemetry on quiet failures

Cloud Foundation
Microsoft Azure · AWS · Google Workspace (Azure as primary; AWS & GWS as working knowledge)
01

Experience

Three engagements, one consistent throughline: ownership across cloud, network, and endpoint domains in environments where uptime, compliance, and end-user productivity are non-negotiable.

Dec 2022 — Present · United Kingdom
HCLTech · Enterprise IT Services

Senior IT Support Analyst

End-user and application support across cloud and on-premises enterprise environments. Administer Microsoft Azure, Azure AD, and Intune for identity, access, and device management. Support SharePoint Online, Microsoft 365, and Windows 10/11. Assist with Cisco switch troubleshooting, VLAN connectivity, and network incident investigation across OSPF, BGP, MPLS, and EIGRP. Manage incidents and requests through ITSM tooling within ITIL-aligned SLAs.

  • Azure & Intune
  • M365 & SharePoint Online
  • Cisco & Network Triage
  • AI Automation (in progress)
May 2022 — Nov 2022 · United Kingdom
SHL · Talent Assessment SaaS

Customer Excellence & Technical Service Engineer

Provided technical support for the SHL cloud-based recruitment and assessment platform. Resolved user access, configuration, and system performance issues for recruiters, administrators, and candidates. Investigated incidents within SLA targets and escalated complex defects to engineering with reproducible test cases. Maintained ticket documentation and customer-facing communication.

  • SaaS Platform Support
  • Triage & Reproducers
  • SLA Delivery
Sep 2014 — Aug 2015 · Lagos, Nigeria
SystemSpecs Limited · Financial Software & Payments

Tech Support & Enterprise Security Engineer

Supported enterprise networks, systems, and applications. Diagnosed and resolved hardware, software, and network issues. Performed scheduled backups and disaster recovery operations. Assisted with Cisco switch administration and connectivity troubleshooting. Implemented baseline network security controls.

  • Cisco Switching
  • Backup & DR
  • Network Security
02

Selected Work

Two architectural case studies — endpoint management plane consolidation and multi-site network incident response — followed by a forward-looking pattern in AI-assisted operations. Where customer or estate specifics are confidential, examples are labelled representative and reflect typical scope rather than a single engagement.

  01 Migrating an enterprise fleet from AirWatch to Intune · Endpoint Architecture · Conditional Access
  02 Multi-site network instability — root cause and remediation · Network Architecture · OSPF · BGP · MPLS
  03 AI-assisted triage and ticket deflection · Forward-looking · Power Automate · Microsoft Graph
01 · Endpoint Architecture · Microsoft Intune · Conditional Access · Representative

Migrating an enterprise fleet from AirWatch to Microsoft Intune.

Phased migration of a mixed VMware Workspace ONE and SCCM estate to Microsoft Intune as the single endpoint management plane, with conditional access enforcement, Autopilot enrollment, and a documented compliance baseline.

Role
Migration support owner · cutover & runbook
Scope
Windows 10/11 fleet · multi-region cohorts
Stakeholders
IT Asset · InfoSec · Compliance · Service Desk
Headline
Compliance baseline reached · ticket volume reduced
Context

The estate consisted of devices managed across multiple consoles: VMware Workspace ONE for the majority of corporate-owned Windows 10/11 endpoints, legacy SCCM-managed machines that had not yet been migrated, and a tail of unmanaged BYOD devices accessing corporate email. Conditional access policies existed in Azure AD but were inconsistently applied. Patch and compliance reporting were fragmented across consoles, generating recurring audit findings on the same control areas each quarter.

Problem

Three operational issues followed from the multi-console state. First, no single source of truth for device posture meant InfoSec, Service Desk, and Asset Management reconciled state from separate dashboards. Second, conditional access blocks were difficult to diagnose when the device record in Azure AD did not match the device record in the MDM. Third, compliance reporting consistently understated coverage gaps because each console reported only on its own scope.

Users & Stakeholders
  • End users across regional offices
  • IT Service Desk (Tier 1 / Tier 2)
  • Information Security — conditional access and baseline policy
  • IT Asset Management — lifecycle and reporting
  • Compliance and audit teams
  • Sponsor: IT Operations Lead
Goals
  • Consolidate to Intune as the single endpoint management plane
  • Enforce conditional access uniformly across in-scope cohorts
  • Establish a real-time compliance baseline visible to auditors
  • Update Service Desk runbooks and complete Tier 1 training
  • Maintain user productivity throughout cutover windows
Role & Responsibilities

Migration support owner on the workstream. Co-authored the cutover runbook with the InfoSec engineer and Asset Management lead, owned end-user communications, handled Tier-2 escalations, and managed the post-cutover stabilization period for each cohort. Architectural decisions were sponsor-approved through the Change Advisory Board.

Architecture — endpoint management plane consolidation
Before · fragmented
VMware Workspace ONE
SCCM (legacy)
Unmanaged BYOD
Endpoint estate: 3 consoles · split reporting · CA blocks hard to diagnose
After · unified plane
Microsoft Intune
Azure AD · Conditional Access
Autopilot · Compliance baseline
Endpoint estate: 1 plane · single posture source · uniform CA · audit-ready
Target-state diagram. Conditional access enforced before application deployment to address the audit’s highest priority first.
Process
  01 Inventory and scope reconciliation · 2 wks

     Reconciled three sources of truth (Azure AD, Workspace ONE, SCCM) into a unified in-scope list; a minimal sketch of this step follows the list. Devices inactive for over ninety days were segregated into a separate decommissioning workstream.

  02 Pilot ring · 3 wks

     Enrolled approximately fifty IT and Service Desk devices first to validate the enrollment flow, comms cadence, and common failure modes prior to engaging business users.

  03 Baseline policy authoring · 2 wks

     Compliance policies, configuration profiles, and conditional access rules defined with InfoSec. Each exception was assigned an owner and a review date in advance of cohort rollout.

  04 Cohort cutover · cont.

     Sequenced by region and user-impact tolerance. Each cohort had a comms plan, Service Desk staffing plan, 48-hour stabilization watch, and pre-defined pause criteria.

  05 Decommissioning · per cohort

     Workspace ONE agent removal performed only after the cohort cleared the compliance baseline threshold, with a defined parallel-run window before the legacy console was retired.

  06 Runbook handover and training · ongoing

     Service Desk runbook rewritten and accompanied by recorded walkthroughs. Tier-1 training completed before the cutover team disengaged from each cohort.
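Inventory reconciliation · Python sketch · illustrative

A minimal sketch of step 01, assuming each console can produce a CSV export. File names, column names, and export formats are stand-ins; only the join-by-serial logic and the ninety-day threshold mirror the process above.

import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)

def load_devices(path, serial_col, last_seen_col):
    """Read one console's CSV export into {serial: last_seen}."""
    devices = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            serial = row[serial_col].strip().upper()
            last_seen = datetime.fromisoformat(row[last_seen_col])
            # Keep the most recent sighting if a serial appears twice.
            if serial not in devices or last_seen > devices[serial]:
                devices[serial] = last_seen
    return devices

# Three sources of truth, one unified view (illustrative export names).
sources = {
    "azure_ad": load_devices("azuread_export.csv", "serialNumber", "lastSignIn"),
    "workspace_one": load_devices("ws1_export.csv", "SerialNumber", "LastSeen"),
    "sccm": load_devices("sccm_export.csv", "Serial", "LastActive"),
}

now = datetime.now()
in_scope, decommission = [], []
for serial in sorted(set().union(*sources.values())):
    newest = max(d[serial] for d in sources.values() if serial in d)
    consoles = [name for name, d in sources.items() if serial in d]
    record = {"serial": serial, "last_seen": newest, "consoles": consoles}
    # Inactive for over ninety days -> separate decommissioning workstream.
    (decommission if now - newest > STALE_AFTER else in_scope).append(record)

print(f"{len(in_scope)} devices in scope, {len(decommission)} to decommission")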
Cutover runbook · Cohort R3 · UK & IE office · Friday window · illustrative

CUTOVER — COHORT R3 · UK & IE OFFICE

T-7 days   Comms email + Teams pin. Self-help KB linked. Service Desk staffing confirmed.
T-3 days   Pre-flight check — AAD device hygiene, BitLocker recovery key escrow.
T-1 day    Final notice. Out-of-office cover roster confirmed.
T-0        Friday 18:00. Cohort moved to Intune scope. Workspace ONE agent retained.
T+1 hr     Spot checks — conditional access, OneDrive sync, BitLocker, Autopilot reset on a sample.
T+24 hr    Service Desk huddle — ticket categorization and FAQ update.
T+7 days   Compliance baseline review — cohort must clear ≥ 95% before agent removal.
T+14 days  Workspace ONE agent removed. Cohort closed.

PAUSE CRITERIA
· Conditional-access lockout rate > 1.5% of cohort sustained for > 2 hours
· > 3 sev-2 incidents linked to the cutover within the first 24 h
· Any sev-1 (loss of access to email, OneDrive, or core line-of-business application)

OWNERS  Support lead · InfoSec engineer · Service Desk shift lead · Asset Management

# The runbook is the operational contract. Deviations are logged and reviewed.
# Out-of-scope items become candidates at the next change advisory window.
Reviewed: Support Lead · InfoSec · Service Desk Lead · Change Advisory Board
Prioritization

Workstreams scored on impact, audit risk, and effort against the InfoSec baseline. Conditional access enforcement sequenced first because it addressed the highest-priority audit finding. Autopilot enrollment followed to reduce repetitive Service Desk effort. Application deployment via Company Portal was scheduled after the baseline stabilized. Cosmetic items (custom branding, niche scripts) were deferred to a clearly identified next-horizon backlog.

Key Decisions & Trade-offs
  • Conditional access enforced before application deployment.

    Sequenced identity controls ahead of user-facing capability to reduce diagnostic complexity downstream and address the audit’s highest priority first.

  • Cohort migration rather than tenant-wide cutover.

    Slower in aggregate, but produced incremental runbook and FAQ refinements that compounded across subsequent cohorts.

  • Parallel-run window before agent removal.

    Carried the operational cost of two consoles for a defined period per cohort. Mitigated rollback risk and increased Change Advisory Board confidence.

  • Named, time-bound exceptions.

    Each device or policy exception was assigned an owner and a review date. Avoided silent exclusions accumulating into hidden technical debt.

Outcomes
≥95%
Compliance baseline across cohorts within 7 days of cutover
Reduction in endpoint-related ticket volume after stabilization
1
Single endpoint management plane established
0
Sev-1 incidents linked to cutover windows
Tools
  • Microsoft Intune
  • Azure AD
  • Conditional Access
  • VMware Workspace ONE
  • Windows 10/11
  • Autopilot
  • BitLocker
  • ServiceNow
  • MS Teams
Lessons
  • Operational migrations are runbook-led; documented procedures determine cutover predictability more than tooling capability.
  • Communications cadence shapes user perception of disruption independently of the technical disruption itself.
  • Silent failure modes (sync drift, conditional-access blocks) require dedicated instrumentation; loud failures self-report.
02 · Network Architecture · OSPF · BGP · MPLS · Incident Response · Representative

Multi-site network instability — root cause analysis and remediation.

Tier-2 incident response on a multi-site enterprise WAN: containment, structured layer-by-layer diagnosis, recovery, and a postmortem with tracked action items that closed within stated ETAs.

Role
Tier-2 incident responder
Scope
Multi-site WAN · MPLS core, OSPF/BGP edges
Stakeholders
Network Ops · Service Desk · Site users
Headline
Service restored within window · postmortem actions closed
Context

Multi-site enterprise WAN with an MPLS core, OSPF for intra-campus routing, BGP at carrier-edge handoffs, and EIGRP retained on a small number of legacy branches scheduled for migration. Steady-state operation, with monitoring on link state, routing adjacencies, and bandwidth utilisation. Incident initiated when Service Desk received correlated complaints from three sites within a fifteen-minute window.

Problem

Symptoms presented as routing instability: OSPF adjacency flaps on a campus edge router and BGP path attribute changes affecting multiple downstream prefixes. Initial diagnostic attention focused on the routing layer. The underlying cause was a degrading SFP transceiver on a primary uplink; the link recovered quickly enough between flaps that dashboards reported it as up while error counters climbed. Layer-1 inspection identified the cause; transceiver replacement on the standby path resolved the incident.

Users & Stakeholders
  • Site users at three affected locations
  • Network Operations engineering
  • Service Desk — user-facing communication
  • Carrier NOC — for BGP edge correlation
  • Sponsor: IT Operations Lead, post-incident
Goals
  • Restore service within the change window
  • Establish a single, evidenced root cause
  • Close all postmortem action items within stated ETAs
  • Update runbook to prevent diagnostic-path repetition
Role & Responsibilities

Tier-2 responder. Joined the bridge call, drove the diagnostic loop, captured the timeline for the postmortem, and owned the runbook update. Worked alongside the Network Engineering on-call engineer, who held the routing-platform expertise.

Topology — multi-site WAN
Symptoms surfaced at the routing layer (OSPF flaps, BGP path changes). Cause was at the physical interface — a degrading transceiver whose recovery thresholds kept the link reporting up.
Diagnostic order — bottom-up OSI inspection
L7 Application · user complaint surfaces here — voice quality, app timeouts
L4 Transport · TCP retransmits visible in flow data
L3 Routing · OSPF / BGP flaps — initial diagnostic focus, the distractor in this incident
L2 Data link · VLAN, CDP/LLDP integrity, MAC table churn
L1 Physical · SFP TX-error counters climbing — root cause confirmed at L1
Runbook update post-incident: layer-1 inspection precedes routing analysis for symptoms presenting as WAN flaps. Predictive SFP TX-error threshold added to NetOps monitoring.
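Predictive TX-error check · Python sketch · illustrative

A sketch of that monitoring addition, assuming a collector already samples interface error counters (for example via SNMP ifOutErrors). The collector stub, device names, and threshold values are illustrative rather than the production configuration.

import time

TX_ERROR_DELTA_THRESHOLD = 50   # errors per poll interval (illustrative)
POLL_INTERVAL_S = 60

def get_tx_errors(device, interface):
    """Stub for the real collector (SNMP poll or streaming telemetry)."""
    raise NotImplementedError

def watch(device, interface, alert):
    last = get_tx_errors(device, interface)
    while True:
        time.sleep(POLL_INTERVAL_S)
        current = get_tx_errors(device, interface)
        delta, last = current - last, current
        # The link can report 'up' while errors climb (the quiet failure
        # mode in this incident), so alert on the counter delta, not on
        # link state.
        if delta > TX_ERROR_DELTA_THRESHOLD:
            alert(f"{device} {interface}: +{delta} TX errors in "
                  f"{POLL_INTERVAL_S}s; inspect optics before routing analysis")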
Process — containment, diagnosis, recovery
Incident postmortem · INC-PM-204 · Wednesday window · blameless · illustrative

INCIDENT POSTMORTEM — WAN INSTABILITY

SUMMARY
Single uplink transceiver flap caused repeated OSPF / BGP re-convergence across three sites. Symptoms presented at the routing layer; cause was layer 1.

TIMELINE
14:02  Service Desk: voice quality complaints from Site B.
14:08  Bridge opened. NetOps + Tier-2 on call.
14:11  OSPF adjacency flap visible on edge router. Initial focus: routing.
14:24  Interface error counters reviewed. Single uplink shows TX errors climbing.
14:27  Transceiver replaced on standby pair; uplink stabilizes.
14:31  Routing reconverges. Spot checks across all three affected sites pass.
14:45  Incident closed. Service Desk notified.

ROOT CAUSE
Failing SFP transceiver on a primary uplink. Recovery thresholds were short enough that the link reported up on dashboards.

CONTRIBUTING FACTORS
· Transceiver predictive-failure metric not in standard alerting.
· Initial diagnostic focus on the routing layer added approximately 20 minutes.
· No Service Desk-facing status update for the first 18 minutes.

ACTION ITEMS
A1  Add SFP TX-error threshold alert to NetOps monitoring.      Owner: NetOps   ETA: 14d
A2  Update WAN runbook: layer-1 checks before routing analysis. Owner: Tier-2   ETA: 7d
A3  Service Desk comms template for in-progress updates.        Owner: SD Lead  ETA: 7d
A4  Review SFP failure rate on the affected vendor batch.       Owner: NetOps   ETA: 30d

# Postmortem is blameless. Names appear only against forward action items.
# Filed in the runbook library; runbook updated with layer-1-first directive.
Reviewed: Tier-2 (lead) · NetOps Engineering · Service Desk Lead · IT Operations
Key Decisions & Trade-offs
  • Layer-1 inspection before routing analysis.

    Subsequent runbook update codifies bottom-up OSI inspection as the default diagnostic path for symptoms presenting as routing instability.

  • Communications cadence during incident.

    Status updates issued every fifteen minutes from incident declaration, even where the content is “investigation in progress.” Reduces parallel investigation by other teams.

  • Blameless postmortem with named action items.

    Names appear only against forward action items, never as cause attribution. Owners commit to ETAs that are tracked to closure.

Prioritization

Service restoration prioritized over root-cause documentation during the active incident window. Full causal analysis was completed in the postmortem rather than during recovery. Interleaving recovery and analysis is a known anti-pattern that extends mean time to recovery.

Outcomes
~30m
Total time to recovery from first complaint
4
Postmortem action items, all closed within stated ETA
L1
Runbook updated to lead with layer-1 checks for WAN flap symptoms
+
Predictive SFP threshold alerting added to monitoring stack
Tools
  • Cisco IOS / IOS-XE
  • OSPF
  • BGP
  • MPLS
  • EIGRP
  • SolarWinds / NetFlow
  • Wireshark
  • ServiceNow
Lessons
  • Bottom-up OSI inspection is the default diagnostic path; routing-layer symptoms frequently surface lower-layer causes.
  • Communications cadence during incidents reduces parallel investigation by other teams.
  • Postmortem value depends on action item closure tracking, not on document quality alone.
Building Toward

The next architectural horizon.

AI-assisted operations as a complement to analyst judgement — not a replacement for it. The reference pattern below is what I’m developing in pilot scope, with conservative confidence thresholds, human-in-loop on access-bearing actions, and audit-grade logging from the first deployment.

Forward-looking · in-progress capability

03 · AI & Workflow Automation · Power Automate · Copilot Studio · Microsoft Graph · Forward-looking · pilot scope

AI-assisted triage and ticket deflection.

Reference architecture for deflecting repeatable Tier-1 tickets through Power Automate flows, Microsoft Graph integration, and a Copilot Studio front-end. Designed with human-in-the-loop controls, conservative confidence thresholds, and full audit logging from day one.

Role
Pilot designer · architecture & prototype
Scope
Top Tier-1 ticket categories · pilot department
Stakeholders
Service Desk · InfoSec · End users
Headline
Deflection on repeatable categories · zero security exceptions
Context

Service Desk ticket analysis showed a long-tail distribution in which a small number of categories accounted for a disproportionate share of Tier-1 effort. Categories included password reset requests, distribution group membership changes, mailbox quota increases within policy, OneDrive sync diagnostics, and basic Microsoft Teams permissions. Each was deterministic, low-judgement, and well-suited to automated handling within defined policy boundaries.
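Category long-tail · Python sketch · illustrative

A back-of-envelope version of that analysis: rank closed Tier-1 tickets by category and print the cumulative share. The input file and column name are illustrative.

import csv
from collections import Counter

with open("tier1_closed_tickets.csv", newline="") as f:
    counts = Counter(row["category"] for row in csv.DictReader(f))

total = sum(counts.values())
cumulative = 0
for category, n in counts.most_common(10):
    cumulative += n
    print(f"{category:<35} {n:>6}  {100 * cumulative / total:5.1f}% cumulative")

# A steep cumulative curve over the first few rows is the long-tail signal:
# a handful of deterministic categories carrying most of the Tier-1 load.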

Problem

Tier-1 capacity was consumed by repeatable, low-complexity work, displacing analyst time from higher-value triage and reducing first-time-resolution rates on the genuinely novel issues. Manual handling also introduced inconsistency: identical requests received different resolution paths depending on which analyst handled them. The objective was deflection of deterministic categories with auditable handling, not full replacement of analyst judgement.

Users & Stakeholders
  • End users requesting common services
  • Service Desk Tier 1 / Tier 2
  • InfoSec — security boundary review
  • IT Operations — runbook ownership
  • Compliance — audit logging requirements
Goals
  • Deflect a meaningful share of repeatable Tier-1 categories
  • Maintain or improve first-time resolution on routed-in tickets
  • Operate within InfoSec policy boundaries with no exceptions
  • Capture full audit logging for every automated action
  • Establish a pattern that generalizes to additional categories
Role & Responsibilities

Pilot designer. Drafted the reference architecture, prototyped Power Automate flows and Copilot Studio dialogue against Microsoft Graph and the ServiceNow API, and partnered with the Service Desk lead on scope. Coordinated InfoSec review of the security boundary and Compliance review of logging requirements before any flow ships beyond pilot.

Reference architecture — assist, do not replace

Where the automated path cannot complete with sufficient confidence, the request is routed to a human analyst with all context attached.
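Confidence-gated routing · Python sketch · illustrative

A sketch of that routing core: execute only when the category is eligible and classification confidence clears the threshold, otherwise hand the ticket to an analyst with all gathered context attached. The classifier, flow runner, and category names are stand-ins for the pilot's actual components; the 95% threshold mirrors the rubric below.

from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.95
ELIGIBLE = {"password_reset", "dl_membership", "mailbox_quota",
            "onedrive_sync", "teams_permissions"}
APPROVAL_GATED = {"dl_membership"}   # access-bearing: human approval required

@dataclass
class Ticket:
    id: str
    text: str
    context: dict = field(default_factory=dict)

def triage(ticket, classify, run_flow, route_to_analyst, request_approval):
    category, confidence = classify(ticket.text)
    ticket.context.update(category=category, confidence=confidence)

    # A wrong action is worse than no action: out of scope or below the
    # threshold, the analyst receives the ticket plus everything gathered.
    if category not in ELIGIBLE or confidence < CONFIDENCE_THRESHOLD:
        return route_to_analyst(ticket)
    if category in APPROVAL_GATED and not request_approval(ticket):
        return route_to_analyst(ticket)
    return run_flow(category, ticket)   # audit logging lives inside the flow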

Automation decision rubric · v1.1 · Pilot · InfoSec-reviewed · illustrative

CATEGORY ELIGIBILITY — before any flow ships
☐ Deterministic outcome — same input, same correct action
☐ Within InfoSec policy — no privilege elevation beyond the user’s entitlement
☐ Audit-loggable — every action attributable to user and flow
☐ Reversible — or carries an explicit human-approval gate
☐ Confidence threshold met — classification accuracy ≥ 95% on backtest

HUMAN-IN-LOOP TRIGGERS
· Access-bearing actions — group membership changes beyond standard policy
· Low-confidence classification — below threshold, route to analyst
· Anomalous request shape — deviates from category baseline
· Out-of-policy — flagged at intake, route + log

CAPTURED FOR ONGOING EVALUATION
· Classification accuracy per category — leading indicator
· Deflection rate per category — primary outcome metric
· False-positive rate — precision proxy
· User satisfaction post-resolution — lagging indicator, survey

# The automation augments analyst capacity. It does not replace judgement
# on access-bearing or anomalous requests. Logging is non-negotiable.
Reviewed: Automation Lead · Service Desk Lead · InfoSec · Compliance
Prioritization — value vs risk

Categories were scored on volume (deflection value) and policy risk. Password resets within self-service policy scored high on value and low on risk — included with standard Microsoft authentication flows. Distribution group membership for non-sensitive groups scored high on value, moderate on risk — included with analyst-approval gate. Privilege escalation, license assignment changes, and anything touching sensitive groups were excluded from the pilot scope.

Key Decisions & Trade-offs
  • Assist, do not replace.

    Architectural pattern preserves analyst judgement for access-bearing actions. Trades absolute deflection rate for trust and audit defensibility.

  • Conservative confidence thresholds.

    A wrong action is worse than no action. Below-threshold classifications route to a human analyst rather than execute speculatively.

  • Microsoft Graph as the action surface.

    Standardizes on a single, auditable API. Avoids one-off connectors that introduce attack surface and reduce reviewability; a sketch of the wrapper pattern follows this list.

  • Pilot scope explicitly bounded.

    License changes, privilege elevation, and sensitive-group membership excluded from pilot. Each requires its own InfoSec review before consideration.
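Auditable action surface · Python sketch · illustrative

A sketch of the wrapper pattern referenced in the decisions above: every automated change passes through one function that writes an intent record before the Microsoft Graph call and an outcome record after it, so full audit logging is structural rather than per-flow discipline. Token acquisition and the log sink are assumed; the group-membership endpoint shown is standard Graph, while the identifiers are hypothetical.

import json, logging, uuid
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
audit = logging.getLogger("automation.audit")

def graph_action(token, method, path, payload, *, actor, ticket_id):
    """Execute one Graph call wrapped in intent/outcome audit records."""
    action_id = str(uuid.uuid4())
    audit.info(json.dumps({"action_id": action_id, "stage": "intent",
                           "actor": actor, "ticket": ticket_id,
                           "method": method, "path": path}))
    resp = requests.request(method, f"{GRAPH}{path}", json=payload,
                            headers={"Authorization": f"Bearer {token}"},
                            timeout=30)
    audit.info(json.dumps({"action_id": action_id, "stage": "outcome",
                           "status": resp.status_code}))
    resp.raise_for_status()
    return resp

# Example: non-sensitive distribution-group membership add, approval-gated.
# graph_action(token, "POST", f"/groups/{group_id}/members/$ref",
#              {"@odata.id": f"{GRAPH}/directoryObjects/{user_id}"},
#              actor="flow:dl_membership", ticket_id="INC0012345")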

Outcomes — pilot
deflection
Meaningful share of repeatable Tier-1 categories deflected in pilot
0
InfoSec policy exceptions through pilot period
100%
Automated actions audit-logged with intent and outcome
+
Analyst time freed for higher-value triage
Tools
  • Power Automate
  • Copilot Studio
  • Microsoft Graph
  • Azure AD
  • ServiceNow API
  • Python (eval scripts)
  • Azure Logic Apps
  • OpenAI / Claude APIs (evaluation)
Lessons
  • Automation succeeds where the underlying process is already well-defined; ill-defined manual processes do not become well-defined when automated.
  • Conservative confidence thresholds and human-in-loop on access-bearing actions are necessary for trust and audit defensibility.
  • Standardising on a single, auditable action surface (Microsoft Graph) reduces review burden and attack surface.
  • The architectural pattern generalises — the same assist-not-replace shape applies across many Tier-1 categories.
03

Operating Principles

Six principles I bring to every engagement. The patterns are durable across enterprise IT, SaaS, and financial services environments — the tooling is incidental.

01

Triage Discipline

Severity is determined by impact and urgency, not by request volume. Tickets are classified within the first two replies and re-classified only with customer acknowledgement. The clock is the operational contract; queue order is determined by severity and age, not by FIFO.

From the field: The triage matrix used at SHL was iterated three times. The third version was the reference that actually shaped intake behaviour.
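Queue ordering · Python sketch · illustrative

A toy version of the ordering rule above: severity first, age within severity, never pure arrival order. The severity scale is illustrative.

import heapq
from dataclasses import dataclass, field
from datetime import datetime

SEVERITY_RANK = {"sev1": 0, "sev2": 1, "sev3": 2, "sev4": 3}

@dataclass(order=True)
class QueuedTicket:
    sort_key: tuple = field(init=False, repr=False)
    id: str = field(compare=False)
    severity: str = field(compare=False)
    opened: datetime = field(compare=False)

    def __post_init__(self):
        # Lower rank sorts first; older tickets first within a severity.
        self.sort_key = (SEVERITY_RANK[self.severity], self.opened)

queue = []
heapq.heappush(queue, QueuedTicket("INC-1", "sev3", datetime(2026, 1, 5, 9, 0)))
heapq.heappush(queue, QueuedTicket("INC-2", "sev1", datetime(2026, 1, 5, 11, 0)))
heapq.heappush(queue, QueuedTicket("INC-3", "sev3", datetime(2026, 1, 5, 8, 0)))

while queue:
    print(heapq.heappop(queue).id)   # INC-2, then INC-3, then INC-1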
02

Communication Cadence

During incidents, status updates are issued at fifteen-minute intervals from declaration, regardless of diagnostic state. Outside incidents, ticket aging is treated as a leading indicator of customer churn — the customer who has gone silent is the priority to contact.

From the field: The eighteen-minute communication gap at the start of the WAN incident became a named action item in the postmortem.
03

Root Cause Over Recurrence

A restart resolves the ticket; a root cause prevents the recurrence. Root cause is pursued where the cost of recurrence is material and diagnosis is tractable; workarounds are documented explicitly when the cost of recurrence is low and full diagnosis would be disproportionate.

From the field: The Intune migration was justified by recurring ticket categories in Service Desk data, not by tooling preference.
04

Documentation as Deliverable

Runbooks, Knowledge Base entries, and postmortems are primary deliverables, not byproducts. A ticket closed without a documented learning recurs in subsequent quarters. The lowest-cost intervention against the next incident is the runbook update from this one.

From the field: The Intune cutover runbook authored in the pilot ring was reused across every cohort — refined with each cohort’s FAQ rather than rewritten.
05

Secure Defaults

Least privilege at the identity layer; conditional access at the device boundary; multi-factor authentication uniformly enforced; sharing links time-bounded by default. Security postures are easier to defend at the design stage than to retrofit. Exceptions are accepted with named owners and documented review dates.

From the field: Conditional access enforcement preceded application deployment in the Intune migration. Subsequent rollout was cleaner because the identity perimeter was already firm.
06

Observability

If a failure mode cannot be observed, it cannot be supported. I push for telemetry on conditions that fail quietly — sync drift, transceiver errors, conditional-access blocks, sharing-link sprawl, automation classification accuracy. Loud failures self-report; quiet failures require dedicated instrumentation.

From the field: The SFP TX-error threshold added after the WAN postmortem now catches the same failure mode before it presents as an incident.

“Good IT support reaches the point where the user does not notice it. Good architecture reaches the point where the next engineer does not need to ask why.”

— JI
04

Tools & Methods

Cloud & Identity
  • Microsoft Azure & Azure AD
  • Conditional Access & MFA
  • Microsoft 365 Admin
  • AWS & Google Workspace (working knowledge)
Endpoint Management
  • Microsoft Intune
  • VMware AirWatch / Workspace ONE
  • Windows 10 / 11 administration
  • Autopilot & BitLocker
Productivity & Collaboration
  • SharePoint Online
  • Microsoft Teams
  • OneDrive & Outlook
  • Office Applications
Networking
  • Cisco IOS / Switch Administration
  • VLAN & Trunking
  • OSPF · BGP · MPLS · EIGRP
  • NetFlow & Wireshark
AI & Workflow Automation
  • Power Automate & Logic Apps
  • Microsoft Copilot Studio
  • Microsoft Graph API
  • Python · OpenAI / Claude APIs (evaluation)
Service & Methodology
  • ITIL aligned (Incident · Problem · Request · Change)
  • Agile / Scrum / Kanban
  • ServiceNow · JIRA · Confluence
  • PowerShell · Excel · Power Query
05

Contact

Open to the next engagement.

Available for senior IT engineering, infrastructure, and solutions architect roles in regulated enterprise environments. Particular interest in engagements that span cloud, network, and endpoint domains, and in teams investing in AI-assisted automation as a complement to human judgement.

Email
djikporukpo@gmail.com
Phone
07405 477707
Based in
Crawley, RH11 · United Kingdom
Education
M.Sc. Civil Engineering · Cisco Networking