
Java Memory Management

Overview

Java Memory Management is a critical aspect of the Java Virtual Machine (JVM) that handles the allocation and deallocation of memory for Java applications. Unlike languages like C and C++ that require manual memory management, Java provides automatic memory management through garbage collection. Understanding how Java manages memory is essential for developing efficient, high-performance applications, especially for large-scale systems where memory optimization becomes crucial for maintaining responsiveness and minimizing resource consumption.

Prerequisites

  • Basic Java programming knowledge
  • Understanding of JVM concepts
  • Familiarity with object-oriented programming
  • Basic knowledge of data structures and algorithms

Learning Objectives

  • Understand the JVM memory architecture
  • Master how Java allocates and deallocates memory
  • Learn about different garbage collection algorithms and strategies
  • Identify common memory leak patterns and their solutions
  • Explore memory monitoring and profiling tools
  • Apply memory optimization techniques in real-world applications
  • Configure JVM parameters for optimal performance
  • Implement best practices for efficient memory management

Table of Contents

  1. JVM Memory Structure
  2. Object Lifecycle
  3. Garbage Collection Fundamentals
  4. Garbage Collection Algorithms
  5. Memory Leaks
  6. Monitoring and Profiling Tools
  7. JVM Tuning
  8. Memory Optimization Techniques
  9. Performance Best Practices
  10. Special Considerations for Large Applications
  11. Best Practices and Common Pitfalls
  12. Resources for Further Learning
  13. Practice Exercises

JVM Memory Structure

The Java Virtual Machine (JVM) memory structure is divided into several key areas, each with specific purposes and characteristics:

Heap Memory

The heap is the runtime data area where all objects and arrays are allocated. It is created when the JVM starts and may increase or decrease in size during application execution.

JVM Heap Structure (Java 8+):
+---------------------+
|      Heap Memory    |
|                     |
| +----------------+  |
| |  Young         |  |
| |  Generation    |  |
| | +------------+ |  |
| | | Eden Space | |  |
| | +------------+ |  |
| | | Survivor   | |  |
| | | Spaces     | |  |
| | +------------+ |  |
| +----------------+  |
|                     |
| +----------------+  |
| |     Old        |  |
| |  Generation    |  |
| +----------------+  |
+---------------------+

Young Generation

  • Eden Space: Initial allocation of most objects
  • Survivor Spaces: Two spaces (S0 and S1) for objects that survive garbage collections
  • Objects are promoted from Young to Old generation after surviving a threshold number of GC cycles

Old Generation

  • Contains objects that have persisted for longer periods
  • Subject to less frequent but more thorough garbage collections

Non-Heap Memory

Stack Memory

Each thread has its own stack, which contains method-specific values and references to objects:

  • Local variables
  • Method parameters
  • Method call and return information
  • Object references

public void methodA() {
    int localVar = 42;        // Stored on stack
    Object obj = new Object(); // Reference on stack, actual object on heap
    methodB(localVar);        // Call information on stack
}

Metaspace (Java 8+)

Metaspace replaced PermGen in Java 8 and stores class metadata:

  • Class definitions
  • Method bytecode
  • Method tables and constant pools

Note that static variables and interned strings do not live here: interned strings moved from PermGen to the heap in Java 7, and in Java 8+ (HotSpot) static fields are stored on the heap alongside their java.lang.Class objects.

Code Cache

Stores compiled native code generated by the Just-In-Time (JIT) compiler.

Direct Memory

Memory allocated outside the JVM heap, commonly used for native I/O operations.
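
These areas can be inspected at runtime through the java.lang.management API. A minimal sketch (the pool names printed vary by JVM vendor and selected collector):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;

public class MemoryAreas {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap:     " + memory.getHeapMemoryUsage());
        System.out.println("Non-heap: " + memory.getNonHeapMemoryUsage());

        // Individual pools, e.g. Eden, Survivor, Old Gen, Metaspace, Code Cache
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + ": " + pool.getUsage());
        }
    }
}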

Object Lifecycle

Understanding an object's lifecycle in Java is crucial for efficient memory management:

1. Object Creation

When an object is created with the new keyword, the JVM:

  1. Allocates memory for the object (usually in Eden space)
  2. Initializes instance variables to default values
  3. Invokes the constructor

// Memory allocation and initialization
Person person = new Person("John", 30);

2. Object Usage

The object remains in memory as long as it's reachable - meaning there's a chain of references from a GC root (like a static field, local variable in an active thread, or JNI reference).

// Object is reachable through the 'person' reference
person.setAge(31);
System.out.println(person.getName());

3. Object Death

An object becomes eligible for garbage collection when it's no longer reachable from any GC roots.

// Object becomes unreachable and eligible for GC
person = null;  // Removing the reference
// or when 'person' goes out of scope

4. Finalization

Before reclaiming memory, the JVM may call the object's finalize() method (though this is not guaranteed and generally discouraged in modern Java).
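
finalize() has been deprecated since Java 9; java.lang.ref.Cleaner is the usual replacement for releasing non-memory resources. A minimal sketch, assuming the cleanup logic can live in a state object that does not reference the tracked instance:

import java.lang.ref.Cleaner;

public class NativeBuffer implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // Cleanup state is kept separate so it does not hold a reference to NativeBuffer
    private static final class State implements Runnable {
        @Override
        public void run() {
            // release native resources here
        }
    }

    private final Cleaner.Cleanable cleanable = CLEANER.register(this, new State());

    @Override
    public void close() {
        cleanable.clean();  // explicit, deterministic release
    }
}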

5. Memory Reclamation

Garbage collection reclaims the memory, making it available for future allocations.

Garbage Collection Fundamentals

Garbage Collection (GC) is the process of automatically reclaiming memory occupied by unused objects.

Core Principles

  1. Identify live objects: Find all objects reachable from GC roots
  2. Remove dead objects: Reclaim memory from unreachable objects

GC Roots

Objects that serve as starting points for the garbage collector's reachability analysis:

  • Local variables on the stack of any active thread
  • Active Java threads
  • Static variables
  • JNI references
  • References held by the JVM for class loading and reflection

GC Process Steps

  1. Marking: Identifies and marks all reachable objects
  2. Sweeping/Compacting: Reclaims memory from unreachable objects and potentially rearranges memory

Stop-the-World Pauses

During certain phases of garbage collection, the JVM suspends all application threads, causing application pauses. These are called "Stop-the-World" (STW) events.

Generational Hypothesis

Java's GC is designed based on the empirical observation that most objects die young. This leads to the generational design with more frequent collections of younger objects.

Garbage Collection Algorithms

The JVM offers several garbage collection algorithms, each with specific strengths and trade-offs:

Serial Collector

  • Single-threaded collector
  • Simple and with low overhead
  • Suitable for small applications or single-processor environments
  • Activated with -XX:+UseSerialGC

Parallel Collector

  • Uses multiple threads for GC operations
  • Faster than Serial for multi-processor systems
  • Still causes stop-the-world pauses
  • Focused on throughput rather than latency
  • Activated with -XX:+UseParallelGC

Concurrent Mark Sweep (CMS) Collector

  • Minimizes pause times by doing most work concurrently with application threads
  • Higher CPU utilization
  • Complex and can suffer from fragmentation
  • Good for applications requiring low latency
  • Activated with -XX:+UseConcMarkSweepGC (deprecated in Java 9, removed in Java 14)

Garbage-First (G1) Collector

  • Default collector since Java 9
  • Divides the heap into regions for more efficient collection
  • Aims to meet a user-defined pause time goal
  • Balances throughput and latency
  • Activated with -XX:+UseG1GC

Z Garbage Collector (ZGC)

  • Low-latency collector introduced in Java 11
  • Designed for applications requiring low pause times (< 10ms)
  • Scales well with increasing heap sizes
  • Activated with -XX:+UseZGC

Shenandoah Collector

  • Low-pause collector with concurrent compaction
  • Similar goals to ZGC but with different implementation
  • Activated with -XX:+UseShenandoahGC

Comparison Table

Collector     Pause Time   Throughput   Memory Overhead   Heap Size
Serial        High         Low          Low               Small
Parallel      Medium       High         Low               Medium
CMS           Low          Medium       High              Medium
G1            Low          Medium       Medium            Large
ZGC           Very Low     Medium       High              Very Large
Shenandoah    Very Low     Medium       High              Very Large

Memory Leaks

Even though Java has automatic garbage collection, memory leaks can still occur when objects remain referenced but aren't actually needed.

Common Causes of Memory Leaks

1. Unclosed Resources

public void processFile(String path) throws IOException {
    // LEAK: FileInputStream is never closed
    FileInputStream fis = new FileInputStream(path);
    // process file...

    // FIX: use try-with-resources
    try (FileInputStream fis2 = new FileInputStream(path)) {
        // process file...
    }
}

2. Static Collections

// LEAK: Static collection that grows unbounded
public class Cache {
    private static final Map<String, Data> cache = new HashMap<>();

    public static void store(String key, Data data) {
        cache.put(key, data);  // Objects are never removed
    }

    // FIX: Use WeakHashMap or implement explicit cleanup
    private static final Map<String, Data> betterCache = 
        Collections.synchronizedMap(new WeakHashMap<>());
}

3. Improper equals/hashCode Implementation

// LEAK: Objects with changing hash codes in HashMaps
public class MutableKey {
    private int id;

    // PROBLEM: hashCode depends on mutable field
    @Override
    public int hashCode() {
        return id;
    }

    public void setId(int id) {
        this.id = id;  // Changes hash code
    }

    // FIX: Use immutable fields for hash code or don't use as keys
}

4. Inner Class References

// LEAK: Non-static inner class holds implicit reference to outer instance
public class Outer {
    private byte[] largeArray = new byte[1000000];

    public Object getInnerInstance() {
        // Inner instance holds reference to Outer
        return new Inner();
    }

    private class Inner {}

    // FIX: Make inner class static to avoid reference to outer
    private static class StaticInner {}
}

5. Thread Local Variables

// LEAK: ThreadLocal variables not removed
private static ThreadLocal<LargeObject> threadLocal = new ThreadLocal<>();

public void process() {
    threadLocal.set(new LargeObject());
    // process...
    // MISSING: threadLocal.remove();  
}

// FIX: Always call remove() when done with ThreadLocal
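
A common way to apply that fix is a try/finally around the task, as in this minimal sketch (LargeObject is the placeholder type from the snippet above):

public void process() {
    threadLocal.set(new LargeObject());
    try {
        // process...
    } finally {
        threadLocal.remove();  // clear the slot so the value cannot outlive the task on a pooled thread
    }
}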

Detecting Memory Leaks

  1. Monitoring heap usage over time
  2. Taking heap dumps with tools like jmap, VisualVM, or JConsole
  3. Analyzing heap dumps with tools like Eclipse Memory Analyzer (MAT)
  4. Watching for symptoms like OutOfMemoryError, increasing GC time, or degrading performance

Heap Dump Analysis Example

  1. Take a heap dump: jmap -dump:format=b,file=heap.hprof <pid>
  2. Analyze with Eclipse MAT:
     • Look for dominator objects
     • Check for large collections
     • Analyze retained heap size
     • Investigate suspicious reference paths
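
On HotSpot-based JVMs, a heap dump can also be triggered programmatically through the JDK-specific com.sun.management diagnostic MXBean; a minimal sketch:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.IOException;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void dumpHeap(String filePath) throws IOException {
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        diagnostic.dumpHeap(filePath, true);  // true = dump only live (reachable) objects
    }
}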

Monitoring and Profiling Tools

Command-Line Tools

jstat

Monitors JVM statistics:

jstat -gc <pid> 1000 10  # GC stats every 1 second, 10 times

jmap

Takes heap dumps and shows memory statistics:

jmap -heap <pid>  # Show heap summary
jmap -dump:format=b,file=heap.hprof <pid>  # Create heap dump

jstack

Prints thread stack traces:

jstack <pid>  # Get thread dump

jcmd

Diagnostic tool with multiple functions:

jcmd <pid> GC.heap_info  # Heap information
jcmd <pid> Thread.print   # Thread dump
jcmd <pid> GC.class_histogram  # Class histogram

Visual Tools

Java Mission Control (JMC) & Flight Recorder (JFR)

Provides low-overhead profiling of CPU, memory, and more:

# Start recording
jcmd <pid> JFR.start settings=profile duration=60s filename=recording.jfr

VisualVM

All-in-one monitoring and profiling tool with plugins:

  • CPU profiling
  • Memory profiling
  • Thread monitoring
  • Heap dumps

Eclipse Memory Analyzer (MAT)

Specialized tool for heap dump analysis:

  • Memory leak detection
  • Object retention analysis
  • Comparison of multiple heap dumps

YourKit Java Profiler

Commercial profiler with advanced features:

  • CPU profiling
  • Memory profiling
  • Thread profiling
  • SQL query analysis

Async Profiler

Low-overhead sampling profiler:

./profiler.sh -d 30 -f profile.html <pid>  # 30 second CPU profile

Metrics and APM Systems

  • JMX: Java Management Extensions for exposing metrics
  • Micrometer: Application metrics facade
  • Prometheus + Grafana: Metrics collection and visualization
  • New Relic/Dynatrace/AppDynamics: Commercial APM solutions
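
As one example, Micrometer ships JVM binders that feed these backends; a minimal sketch assuming the micrometer-core dependency is on the classpath:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.jvm.JvmGcMetrics;
import io.micrometer.core.instrument.binder.jvm.JvmMemoryMetrics;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class JvmMetricsSetup {
    public static MeterRegistry createRegistry() {
        MeterRegistry registry = new SimpleMeterRegistry();  // swap for a Prometheus registry in production
        new JvmMemoryMetrics().bindTo(registry);  // registers jvm.memory.used, jvm.memory.max, ...
        new JvmGcMetrics().bindTo(registry);      // registers GC pause and allocation metrics
        return registry;
    }
}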

JVM Tuning

Heap Size Configuration

# Set initial and maximum heap size
java -Xms2g -Xmx8g -jar application.jar

# Set survivor ratio
java -XX:SurvivorRatio=8 -jar application.jar

# Set young generation size
java -Xmn1g -jar application.jar

Garbage Collector Selection

# Use G1GC (default in Java 9+)
java -XX:+UseG1GC -jar application.jar

# Use ZGC for ultra-low latency
java -XX:+UseZGC -jar application.jar

GC Tuning Parameters

# G1GC pause time goal (milliseconds)
java -XX:MaxGCPauseMillis=200 -jar application.jar

# Ratio of old to young generation (NewRatio=2 means the old generation is twice the size of the young)
java -XX:NewRatio=2 -jar application.jar

# CMS initiating occupancy fraction (old-generation occupancy that starts a CMS cycle)
java -XX:CMSInitiatingOccupancyFraction=70 -jar application.jar

# Print GC details (Java 8 and earlier; use -Xlog on Java 9+)
java -XX:+PrintGCDetails -jar application.jar

# Print GC timestamps (Java 8 and earlier)
java -XX:+PrintGCDateStamps -jar application.jar

# Log GC to file (Java 9+)
java -Xlog:gc*:file=gc.log:time,uptime:filecount=5,filesize=10m -jar application.jar

# Metaspace size
java -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m -jar application.jar

Stack Size Configuration

# Set thread stack size
java -Xss256k -jar application.jar

Common JVM Tuning Scenarios

High-Throughput Batch Processing

java -Xms4g -Xmx4g -XX:+UseParallelGC -XX:ParallelGCThreads=8 -jar batch.jar

Low-Latency Web Application

java -Xms2g -Xmx2g -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -jar webapp.jar

Large In-Memory Database

java -Xms16g -Xmx16g -XX:+UseG1GC -XX:G1HeapRegionSize=16m -jar database.jar

Memory Optimization Techniques

Object Pooling

For expensive-to-create or frequently used objects:

public class ConnectionPool {
    private final BlockingQueue<Connection> pool;

    public ConnectionPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(createConnection());
        }
    }

    public Connection getConnection() throws InterruptedException {
        return pool.take();
    }

    public void releaseConnection(Connection conn) {
        pool.offer(conn);
    }

    private Connection createConnection() {
        // Create and return a new database connection
    }
}
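
Typical usage pairs every take with a release in a try/finally, so a connection returns to the pool even if the work fails; a minimal sketch:

void runWithConnection(ConnectionPool pool) throws InterruptedException {
    Connection conn = pool.getConnection();   // blocks until a connection is free
    try {
        // use the connection...
    } finally {
        pool.releaseConnection(conn);         // always returned, even if the work throws
    }
}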

Lazy Initialization

Defer object creation until needed:

public class ExpensiveResource {
    private static class Holder {
        static final ExpensiveResource INSTANCE = new ExpensiveResource();
    }

    // Lazy initialization using class holder pattern
    public static ExpensiveResource getInstance() {
        return Holder.INSTANCE;
    }
}

Caching

Store results of expensive operations:

public class DataService {
    private final Map<String, Data> cache = new ConcurrentHashMap<>();

    public Data getData(String key) {
        // Check cache first
        return cache.computeIfAbsent(key, this::loadData);
    }

    private Data loadData(String key) {
        // Expensive operation to load data
    }
}
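
When the cache must not grow without bound, a LinkedHashMap in access order gives a simple LRU eviction policy; a minimal sketch:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);  // accessOrder = true -> least recently used entry is eldest
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;  // evict once the bound is exceeded
    }
}

Unlike the ConcurrentHashMap-based cache above, this class is not thread-safe; wrap it with Collections.synchronizedMap or guard it externally if it is shared across threads.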

Reducing Object Size

Minimize the memory footprint of frequently instantiated objects:

// BEFORE: boxed wrapper fields -- each value is a separate heap object
// with its own header, reached through a reference field
class Customer {
    private Integer loyaltyPoints;
    private Long lastOrderId;
    private Boolean active;
}

// AFTER: primitive fields store the values inline in the instance,
// with no extra objects or references
class CompactCustomer {
    private int loyaltyPoints;
    private long lastOrderId;
    private boolean active;
}

Using Primitive Arrays

Prefer primitive arrays over collections for large datasets:

// Less memory efficient
List<Integer> list = new ArrayList<>(1000000);
for (int i = 0; i < 1000000; i++) {
    list.add(i);  // Autoboxing to Integer objects
}

// More memory efficient
int[] array = new int[1000000];
for (int i = 0; i < 1000000; i++) {
    array[i] = i;  // No boxing, just primitives
}

Flyweight Pattern

Share common parts of objects:

public class GlyphFactory {
    private static final Glyph[] cache = new Glyph[128];

    static {
        // Pre-build one shared instance per ASCII character
        for (int i = 0; i < cache.length; i++) {
            cache[i] = new Glyph((char) i);
        }
    }

    public static Glyph getGlyph(char c) {
        if (c < 128) {
            return cache[c];       // shared (flyweight) instance
        } else {
            return new Glyph(c);   // fall back to a fresh instance
        }
    }

    public static class Glyph {
        private final char value;

        private Glyph(char value) {
            this.value = value;
        }

        public char getValue() {
            return value;
        }
    }
}

String Interning

Use string interning for frequently used strings:

String s1 = new String("hello").intern();
String s2 = "hello";
System.out.println(s1 == s2);  // true, same reference

Soft/Weak References

Use for caching that adjusts to memory pressure:

public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> cache = new ConcurrentHashMap<>();

    public V get(K key) {
        SoftReference<V> ref = cache.get(key);
        if (ref != null) {
            V value = ref.get();
            if (value != null) {
                return value;
            } else {
                // Value was garbage collected
                cache.remove(key);
            }
        }
        return null;
    }

    public void put(K key, V value) {
        cache.put(key, new SoftReference<>(value));
    }
}

Off-Heap Storage

For very large data sets that exceed heap capacity:

// Using ByteBuffer for direct (off-heap) memory
ByteBuffer buffer = ByteBuffer.allocateDirect(1024 * 1024 * 1024); // 1GB

// Write data
buffer.putInt(0, 42);

// Read data
int value = buffer.getInt(0);

Performance Best Practices

Memory Allocation

  1. Minimize object creation in critical paths:

    // Inefficient: creates objects in a loop
    for (int i = 0; i < 1000000; i++) {
        String s = "Value: " + i;  // Creates new String each iteration
        process(s);
    }
    
    // Better: reuse objects
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 1000000; i++) {
        sb.setLength(0);
        sb.append("Value: ").append(i);
        process(sb.toString());
    }
    

  2. Prefer bulk operations:

    // Less efficient: individual adds
    List<String> list = new ArrayList<>();
    for (String item : items) {
        list.add(item);
    }
    
    // More efficient: bulk operation
    List<String> list = new ArrayList<>(Arrays.asList(items));
    

  3. Pre-size collections:

    // Without pre-sizing: multiple resizing operations
    Map<String, User> users = new HashMap<>();  // Default initial capacity
    
    // With pre-sizing: avoids resizing
    Map<String, User> users = new HashMap<>(expectedSize);
    

  4. Consider object allocation rate:
     • High allocation rates trigger more frequent GC
     • Reduce temporary object creation
     • Use object pooling for expensive objects

Collection Efficiency

  1. Choose the right collection (see the EnumSet sketch after this list):
     • ArrayList for random access
     • LinkedList for frequent insertions/deletions
     • HashSet for fast membership testing
     • EnumSet for enum-based sets (very memory efficient)

  2. Use specialized collections for primitives:

    // Standard collection with boxing overhead
    List<Integer> numbers = new ArrayList<>();
    
    // Specialized primitive collection (e.g., with Trove or Fastutil)
    TIntList numbers = new TIntArrayList();
    

  3. Clear collections proactively:

    // Help GC by clearing when done
    largeTemporaryList.clear();
    
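
To illustrate the EnumSet point from item 1, an enum-based set stores membership as a bit mask instead of per-element entry objects; a minimal sketch with a hypothetical Permission enum:

import java.util.EnumSet;
import java.util.Set;

public class PermissionExample {
    // Hypothetical application enum, used only for illustration
    enum Permission { READ, WRITE, EXECUTE }

    public static void main(String[] args) {
        Set<Permission> permissions = EnumSet.of(Permission.READ, Permission.WRITE);
        permissions.add(Permission.EXECUTE);  // a single bit flip, no per-element node objects
        System.out.println(permissions.contains(Permission.WRITE));  // true
    }
}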

Resource Management

  1. Always close resources:

    // Using try-with-resources
    try (InputStream in = new FileInputStream(file);
         OutputStream out = new FileOutputStream(output)) {
        // Use resources
    }
    

  2. Dispose of native resources explicitly:

    // Native resource releasing
    BufferedImage image = createLargeImage();
    // Use image...
    image.flush();  // Release native resources
    

  3. Manage thread lifecycle (see the shutdown sketch after this list):
     • Shut down thread pools when no longer needed
     • Use daemon threads for background services
     • Prefer higher-level abstractions like ExecutorService
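
For the thread-lifecycle point above, the usual pattern is an orderly shutdown with a bounded wait; a minimal sketch:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public final class ExecutorShutdown {
    public static void shutdownGracefully(ExecutorService executor) throws InterruptedException {
        executor.shutdown();                                   // stop accepting new tasks
        if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
            executor.shutdownNow();                            // interrupt tasks that are still running
        }
    }
}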

Efficient String Handling

  1. Use StringBuilder for concatenation:

    // Inefficient: creates multiple temporary strings
    String result = "";
    for (int i = 0; i < 100; i++) {
        result += i;
    }
    
    // Efficient: reuses the same buffer
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 100; i++) {
        sb.append(i);
    }
    String result = sb.toString();
    

  2. Avoid unnecessary string conversions:

    // Inefficient: unnecessary conversion
    if (string.toString().equals("value")) { ... }
    
    // Efficient: direct comparison
    if (string.equals("value")) { ... }
    

Memory-Conscious Algorithms

  1. Streaming for large datasets:

    // Memory intensive: loads all data at once
    List<Record> records = loadAllRecords();
    List<Result> results = process(records);
    
    // Memory efficient: processes one record at a time
    try (Stream<Record> recordStream = streamRecords()) {
        return recordStream
            .map(this::processRecord)
            .collect(Collectors.toList());
    }
    

  2. Pagination for large result sets:

    // Instead of fetching all records at once
    List<Record> records = repository.findAllByUserId(userId);
    
    // Use pagination
    Page<Record> page = repository.findByUserId(userId, PageRequest.of(0, 100));
    

  3. In-place modifications:

    // Sorts in place without creating new arrays
    Arrays.sort(data);
    
    // In-place collection operations
    Collections.sort(list);
    

Special Considerations for Large Applications

Managing Large Heaps

  1. Heap size considerations:
     • Larger heaps allow more caching and reduce GC frequency
     • But they increase GC pause duration (except with ZGC/Shenandoah)
     • Find a balance between throughput and latency requirements

  2. Tuning for large heaps:

    # G1GC for large heaps
    java -Xms10g -Xmx10g -XX:+UseG1GC -XX:G1HeapRegionSize=32m -jar app.jar
    
    # ZGC for large heaps with low latency
    java -Xms20g -Xmx20g -XX:+UseZGC -XX:ZAllocationSpikeTolerance=2.0 -jar app.jar
    

  3. Distributed caching:
     • Use external caching systems like Redis or Memcached
     • Offload memory pressure from the JVM heap

Multi-Tenant Applications

  1. Isolation strategies:
     • Separate classloaders per tenant
     • Tenant-specific thread pools
     • Memory quotas per tenant

  2. Resource monitoring:
     • Track memory usage per tenant
     • Implement circuit breakers for runaway tenants

Memory Usage in Frameworks

  1. Web frameworks:
     • Session size limitations
     • Request/response buffering strategies
     • Connection pooling configuration

  2. ORM frameworks:
     • Entity caching configuration
     • Batch processing for large datasets
     • Lazy loading vs eager fetching

  3. Message brokers:
     • Consumer prefetch settings
     • Producer buffering configuration
     • Dead letter handling

Containerized Applications

  1. Container memory limits:

    # Make the JVM respect container memory limits
    # (UseContainerSupport is enabled by default since JDK 10; backported to 8u191)
    java -XX:+UseContainerSupport -jar app.jar
    

  2. Memory reservation strategies:
     • Reserve memory for non-heap usage
     • Account for off-heap memory usage
     • Reserve memory for native code

  3. Kubernetes considerations:

    resources:
      requests:
        memory: "1Gi"
      limits:
        memory: "2Gi"
    

Best Practices and Common Pitfalls

Memory Management Best Practices

  1. Design for predictable memory usage:
     • Understand your application's memory profile
     • Set upper bounds on caches and collections
     • Design data structures with memory efficiency in mind

  2. Regular monitoring and profiling:
     • Implement memory usage monitoring
     • Schedule periodic profiling
     • Establish baselines and alert on deviations

  3. Preemptive memory leak detection:
     • Code reviews focused on potential leaks
     • Memory leak detection in testing environments
     • Historical trend analysis of memory usage

  4. Documentation and knowledge sharing:
     • Document memory tuning parameters
     • Share memory optimization techniques
     • Create run books for memory-related incidents

Common Pitfalls to Avoid

  1. Premature optimization:
     • Focus on the areas with proven memory issues
     • Measure before and after optimization
     • Balance code readability with optimization

  2. Excessive tuning:
     • Too many JVM flags can cause unpredictable behavior
     • Tune important parameters first, then refine
     • Document reasons for each tuning parameter

  3. Ignoring non-heap memory (see the sketch after this list):
     • DirectBuffer allocation
     • Native library memory usage
     • Memory-mapped files

  4. One-size-fits-all solutions:
     • GC tuning is workload-dependent
     • Development vs. production settings may differ
     • Different applications have different memory profiles

  5. Missing a holistic view:
     • Memory is just one system resource
     • Consider interaction with CPU, disk, network, etc.
     • Look at end-to-end system performance
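
For the non-heap point above, direct and mapped buffer usage can be observed through the platform BufferPoolMXBeans; a minimal sketch:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class DirectMemoryReport {
    public static void main(String[] args) {
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            // Typically reports the "direct" and "mapped" buffer pools
            System.out.println(pool.getName() + ": used=" + pool.getMemoryUsed()
                    + " bytes, capacity=" + pool.getTotalCapacity() + " bytes");
        }
    }
}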

Resources for Further Learning

  1. Official Documentation:
     • JVM Guide
     • Memory Management Whitepaper
     • JDK Tools Reference

  2. Books:
     • "Java Performance: The Definitive Guide" by Scott Oaks
     • "Optimizing Java" by Benjamin Evans, James Gough, and Chris Newland
     • "Java Performance Companion" by Charlie Hunt et al.

  3. Online Resources:
     • GC Handbook
     • Baeldung Memory Management Articles
     • Java Performance Tuning

  4. Tools Documentation:
     • Eclipse Memory Analyzer
     • Java Mission Control
     • VisualVM Documentation

Practice Exercises

  1. Memory Leak Detection: Create a program with a deliberate memory leak, then use profiling tools to detect and fix it.

  2. GC Tuning Comparison: Experiment with different GC algorithms and settings for a simple benchmark application.

  3. Memory Footprint Optimization: Optimize a data structure for memory efficiency while maintaining performance.

  4. Heap Dump Analysis: Analyze a provided heap dump to identify memory usage patterns and potential issues.

  5. Off-Heap Storage Implementation: Design a cache that uses direct ByteBuffers for large data storage.

  6. Resource Pool Implementation: Create a reusable resource pool with configurable capacity and timeout handling.

  7. Memory Monitoring Dashboard: Set up a Prometheus/Grafana dashboard for JVM memory metrics.

  8. Cache Eviction Strategies: Implement and compare different cache eviction policies (LRU, LFU, time-based).

  9. Large Dataset Processing: Implement memory-efficient processing of a large dataset using streams and pagination.

  10. Container Memory Optimization: Configure a Spring Boot application for optimal memory usage in a container environment.